00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1998 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3259 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.086 Fetching changes from the remote Git repository 00:00:00.088 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.106 Using shallow fetch with depth 1 00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.106 > git --version # timeout=10 00:00:00.125 > git --version # 'git version 2.39.2' 00:00:00.125 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.144 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.144 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.883 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.895 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.910 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:03.910 > git config core.sparsecheckout # timeout=10 00:00:03.924 > git read-tree -mu HEAD # timeout=10 00:00:03.940 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:03.960 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:03.960 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:04.048 [Pipeline] Start of Pipeline 00:00:04.063 [Pipeline] library 00:00:04.065 Loading library shm_lib@master 00:00:04.065 Library shm_lib@master is cached. Copying from home. 00:00:04.082 [Pipeline] node 00:00:04.094 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:04.096 [Pipeline] { 00:00:04.108 [Pipeline] catchError 00:00:04.109 [Pipeline] { 00:00:04.124 [Pipeline] wrap 00:00:04.134 [Pipeline] { 00:00:04.143 [Pipeline] stage 00:00:04.145 [Pipeline] { (Prologue) 00:00:04.162 [Pipeline] echo 00:00:04.163 Node: VM-host-SM0 00:00:04.168 [Pipeline] cleanWs 00:00:04.176 [WS-CLEANUP] Deleting project workspace... 00:00:04.176 [WS-CLEANUP] Deferred wipeout is used... 
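The checkout above is a shallow, single-ref fetch followed by a detached checkout of whatever FETCH_HEAD resolved to; reproduced by hand outside Jenkins (same URL and ref as in the log, with the credential helper, proxy and timeouts omitted), the sequence is roughly:

    # Shallow-fetch the jbp branch tip and check it out detached.
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f FETCH_HEAD   # resolved to 4b79378c7834917407ff4d2cff4edf1dcbb13c5f in this run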
00:00:04.182 [WS-CLEANUP] done 00:00:04.377 [Pipeline] setCustomBuildProperty 00:00:04.449 [Pipeline] httpRequest 00:00:04.465 [Pipeline] echo 00:00:04.466 Sorcerer 10.211.164.101 is alive 00:00:04.473 [Pipeline] httpRequest 00:00:04.476 HttpMethod: GET 00:00:04.476 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.477 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:04.477 Response Code: HTTP/1.1 200 OK 00:00:04.478 Success: Status code 200 is in the accepted range: 200,404 00:00:04.478 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:05.062 [Pipeline] sh 00:00:05.347 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:05.359 [Pipeline] httpRequest 00:00:05.375 [Pipeline] echo 00:00:05.376 Sorcerer 10.211.164.101 is alive 00:00:05.385 [Pipeline] httpRequest 00:00:05.388 HttpMethod: GET 00:00:05.389 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:05.389 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:05.402 Response Code: HTTP/1.1 200 OK 00:00:05.402 Success: Status code 200 is in the accepted range: 200,404 00:00:05.403 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:44.976 [Pipeline] sh 00:00:45.254 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:48.546 [Pipeline] sh 00:00:48.826 + git -C spdk log --oneline -n5 00:00:48.826 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:48.826 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:48.826 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:48.826 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:48.826 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:48.850 [Pipeline] writeFile 00:00:48.868 [Pipeline] sh 00:00:49.148 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:49.160 [Pipeline] sh 00:00:49.440 + cat autorun-spdk.conf 00:00:49.440 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.440 SPDK_TEST_NVMF=1 00:00:49.440 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.440 SPDK_TEST_VFIOUSER=1 00:00:49.440 SPDK_TEST_USDT=1 00:00:49.440 SPDK_RUN_UBSAN=1 00:00:49.440 SPDK_TEST_NVMF_MDNS=1 00:00:49.440 NET_TYPE=virt 00:00:49.440 SPDK_JSONRPC_GO_CLIENT=1 00:00:49.440 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:49.446 RUN_NIGHTLY=1 00:00:49.448 [Pipeline] } 00:00:49.466 [Pipeline] // stage 00:00:49.484 [Pipeline] stage 00:00:49.487 [Pipeline] { (Run VM) 00:00:49.503 [Pipeline] sh 00:00:49.782 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:49.783 + echo 'Start stage prepare_nvme.sh' 00:00:49.783 Start stage prepare_nvme.sh 00:00:49.783 + [[ -n 4 ]] 00:00:49.783 + disk_prefix=ex4 00:00:49.783 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:00:49.783 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:00:49.783 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:00:49.783 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.783 ++ SPDK_TEST_NVMF=1 00:00:49.783 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.783 ++ SPDK_TEST_VFIOUSER=1 00:00:49.783 ++ SPDK_TEST_USDT=1 00:00:49.783 ++ SPDK_RUN_UBSAN=1 00:00:49.783 ++ SPDK_TEST_NVMF_MDNS=1 00:00:49.783 ++ NET_TYPE=virt 00:00:49.783 ++ SPDK_JSONRPC_GO_CLIENT=1 
00:00:49.783 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:49.783 ++ RUN_NIGHTLY=1 00:00:49.783 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:49.783 + nvme_files=() 00:00:49.783 + declare -A nvme_files 00:00:49.783 + backend_dir=/var/lib/libvirt/images/backends 00:00:49.783 + nvme_files['nvme.img']=5G 00:00:49.783 + nvme_files['nvme-cmb.img']=5G 00:00:49.783 + nvme_files['nvme-multi0.img']=4G 00:00:49.783 + nvme_files['nvme-multi1.img']=4G 00:00:49.783 + nvme_files['nvme-multi2.img']=4G 00:00:49.783 + nvme_files['nvme-openstack.img']=8G 00:00:49.783 + nvme_files['nvme-zns.img']=5G 00:00:49.783 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:49.783 + (( SPDK_TEST_FTL == 1 )) 00:00:49.783 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:49.783 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:49.783 + for nvme in "${!nvme_files[@]}" 00:00:49.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:49.783 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.783 + for nvme in "${!nvme_files[@]}" 00:00:49.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:49.783 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.783 + for nvme in "${!nvme_files[@]}" 00:00:49.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:49.783 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:49.783 + for nvme in "${!nvme_files[@]}" 00:00:49.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:49.783 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.783 + for nvme in "${!nvme_files[@]}" 00:00:49.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:49.783 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.783 + for nvme in "${!nvme_files[@]}" 00:00:49.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:49.783 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.783 + for nvme in "${!nvme_files[@]}" 00:00:49.783 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:50.039 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:50.039 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:50.039 + echo 'End stage prepare_nvme.sh' 00:00:50.039 End stage prepare_nvme.sh 00:00:50.051 [Pipeline] sh 00:00:50.329 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:50.329 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:00:50.329 
00:00:50.329 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:00:50.329 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:00:50.329 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:50.329 HELP=0 00:00:50.329 DRY_RUN=0 00:00:50.329 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:50.329 NVME_DISKS_TYPE=nvme,nvme, 00:00:50.329 NVME_AUTO_CREATE=0 00:00:50.329 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:50.329 NVME_CMB=,, 00:00:50.329 NVME_PMR=,, 00:00:50.329 NVME_ZNS=,, 00:00:50.329 NVME_MS=,, 00:00:50.329 NVME_FDP=,, 00:00:50.329 SPDK_VAGRANT_DISTRO=fedora38 00:00:50.329 SPDK_VAGRANT_VMCPU=10 00:00:50.329 SPDK_VAGRANT_VMRAM=12288 00:00:50.329 SPDK_VAGRANT_PROVIDER=libvirt 00:00:50.330 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:50.330 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:50.330 SPDK_OPENSTACK_NETWORK=0 00:00:50.330 VAGRANT_PACKAGE_BOX=0 00:00:50.330 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:50.330 FORCE_DISTRO=true 00:00:50.330 VAGRANT_BOX_VERSION= 00:00:50.330 EXTRA_VAGRANTFILES= 00:00:50.330 NIC_MODEL=e1000 00:00:50.330 00:00:50.330 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:00:50.330 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:52.861 Bringing machine 'default' up with 'libvirt' provider... 00:00:53.798 ==> default: Creating image (snapshot of base box volume). 00:00:54.058 ==> default: Creating domain with the following settings... 
00:00:54.058 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720680817_9a6246e3ee1ca10fb9e6 00:00:54.058 ==> default: -- Domain type: kvm 00:00:54.058 ==> default: -- Cpus: 10 00:00:54.058 ==> default: -- Feature: acpi 00:00:54.058 ==> default: -- Feature: apic 00:00:54.058 ==> default: -- Feature: pae 00:00:54.058 ==> default: -- Memory: 12288M 00:00:54.058 ==> default: -- Memory Backing: hugepages: 00:00:54.058 ==> default: -- Management MAC: 00:00:54.058 ==> default: -- Loader: 00:00:54.058 ==> default: -- Nvram: 00:00:54.058 ==> default: -- Base box: spdk/fedora38 00:00:54.058 ==> default: -- Storage pool: default 00:00:54.058 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720680817_9a6246e3ee1ca10fb9e6.img (20G) 00:00:54.058 ==> default: -- Volume Cache: default 00:00:54.058 ==> default: -- Kernel: 00:00:54.058 ==> default: -- Initrd: 00:00:54.058 ==> default: -- Graphics Type: vnc 00:00:54.058 ==> default: -- Graphics Port: -1 00:00:54.058 ==> default: -- Graphics IP: 127.0.0.1 00:00:54.058 ==> default: -- Graphics Password: Not defined 00:00:54.058 ==> default: -- Video Type: cirrus 00:00:54.058 ==> default: -- Video VRAM: 9216 00:00:54.058 ==> default: -- Sound Type: 00:00:54.058 ==> default: -- Keymap: en-us 00:00:54.058 ==> default: -- TPM Path: 00:00:54.058 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:54.058 ==> default: -- Command line args: 00:00:54.058 ==> default: -> value=-device, 00:00:54.058 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:54.058 ==> default: -> value=-drive, 00:00:54.058 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:54.058 ==> default: -> value=-device, 00:00:54.058 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.058 ==> default: -> value=-device, 00:00:54.058 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:54.058 ==> default: -> value=-drive, 00:00:54.058 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:54.058 ==> default: -> value=-device, 00:00:54.058 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.058 ==> default: -> value=-drive, 00:00:54.058 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:54.058 ==> default: -> value=-device, 00:00:54.058 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.058 ==> default: -> value=-drive, 00:00:54.058 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:54.058 ==> default: -> value=-device, 00:00:54.058 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.058 ==> default: Creating shared folders metadata... 00:00:54.317 ==> default: Starting domain. 00:00:55.695 ==> default: Waiting for domain to get an IP address... 00:01:13.779 ==> default: Waiting for SSH to become available... 00:01:13.779 ==> default: Configuring and enabling network interfaces... 
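The -device/-drive pairs in the domain definition above are the part that matters for this job: one emulated NVMe controller (serial 12340) with a single namespace backed by ex4-nvme.img, and a second controller (serial 12341) exposing three namespaces backed by the multi0/1/2 images. Reflowed into one invocation, and keeping only the NVMe-related arguments (the machine, memory, display and boot options that vagrant-libvirt generates are represented by the placeholder line), this is roughly:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        [libvirt-generated machine/memory/boot options] \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096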
00:01:17.067 default: SSH address: 192.168.121.41:22 00:01:17.067 default: SSH username: vagrant 00:01:17.067 default: SSH auth method: private key 00:01:19.043 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:27.234 ==> default: Mounting SSHFS shared folder... 00:01:28.168 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:28.168 ==> default: Checking Mount.. 00:01:29.102 ==> default: Folder Successfully Mounted! 00:01:29.102 ==> default: Running provisioner: file... 00:01:30.036 default: ~/.gitconfig => .gitconfig 00:01:30.602 00:01:30.602 SUCCESS! 00:01:30.602 00:01:30.602 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:30.602 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:30.602 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:30.602 00:01:30.611 [Pipeline] } 00:01:30.630 [Pipeline] // stage 00:01:30.638 [Pipeline] dir 00:01:30.639 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:01:30.640 [Pipeline] { 00:01:30.655 [Pipeline] catchError 00:01:30.656 [Pipeline] { 00:01:30.670 [Pipeline] sh 00:01:30.948 + vagrant ssh-config --host vagrant 00:01:30.948 + sed -ne /^Host/,$p 00:01:30.948 + tee ssh_conf 00:01:34.228 Host vagrant 00:01:34.228 HostName 192.168.121.41 00:01:34.228 User vagrant 00:01:34.228 Port 22 00:01:34.228 UserKnownHostsFile /dev/null 00:01:34.228 StrictHostKeyChecking no 00:01:34.228 PasswordAuthentication no 00:01:34.228 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:34.228 IdentitiesOnly yes 00:01:34.228 LogLevel FATAL 00:01:34.228 ForwardAgent yes 00:01:34.228 ForwardX11 yes 00:01:34.228 00:01:34.246 [Pipeline] withEnv 00:01:34.249 [Pipeline] { 00:01:34.268 [Pipeline] sh 00:01:34.548 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:34.548 source /etc/os-release 00:01:34.548 [[ -e /image.version ]] && img=$(< /image.version) 00:01:34.548 # Minimal, systemd-like check. 00:01:34.548 if [[ -e /.dockerenv ]]; then 00:01:34.548 # Clear garbage from the node's name: 00:01:34.548 # agt-er_autotest_547-896 -> autotest_547-896 00:01:34.548 # $HOSTNAME is the actual container id 00:01:34.548 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:34.548 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:34.548 # We can assume this is a mount from a host where container is running, 00:01:34.548 # so fetch its hostname to easily identify the target swarm worker. 
00:01:34.548 container="$(< /etc/hostname) ($agent)" 00:01:34.548 else 00:01:34.548 # Fallback 00:01:34.548 container=$agent 00:01:34.548 fi 00:01:34.548 fi 00:01:34.548 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:34.548 00:01:34.561 [Pipeline] } 00:01:34.581 [Pipeline] // withEnv 00:01:34.592 [Pipeline] setCustomBuildProperty 00:01:34.612 [Pipeline] stage 00:01:34.615 [Pipeline] { (Tests) 00:01:34.639 [Pipeline] sh 00:01:34.918 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:35.192 [Pipeline] sh 00:01:35.472 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:35.745 [Pipeline] timeout 00:01:35.746 Timeout set to expire in 40 min 00:01:35.748 [Pipeline] { 00:01:35.768 [Pipeline] sh 00:01:36.048 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:36.616 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:36.629 [Pipeline] sh 00:01:36.908 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:37.181 [Pipeline] sh 00:01:37.460 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:37.736 [Pipeline] sh 00:01:38.015 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:38.275 ++ readlink -f spdk_repo 00:01:38.275 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:38.275 + [[ -n /home/vagrant/spdk_repo ]] 00:01:38.275 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:38.275 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:38.275 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:38.275 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:38.275 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:38.275 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:38.275 + cd /home/vagrant/spdk_repo 00:01:38.275 + source /etc/os-release 00:01:38.275 ++ NAME='Fedora Linux' 00:01:38.275 ++ VERSION='38 (Cloud Edition)' 00:01:38.275 ++ ID=fedora 00:01:38.275 ++ VERSION_ID=38 00:01:38.275 ++ VERSION_CODENAME= 00:01:38.275 ++ PLATFORM_ID=platform:f38 00:01:38.275 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:38.275 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.275 ++ LOGO=fedora-logo-icon 00:01:38.275 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:38.275 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.275 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:38.275 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.275 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.275 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.275 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:38.275 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.275 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:38.275 ++ SUPPORT_END=2024-05-14 00:01:38.275 ++ VARIANT='Cloud Edition' 00:01:38.275 ++ VARIANT_ID=cloud 00:01:38.275 + uname -a 00:01:38.275 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:38.275 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:38.275 Hugepages 00:01:38.275 node hugesize free / total 00:01:38.275 node0 1048576kB 0 / 0 00:01:38.275 node0 2048kB 0 / 0 00:01:38.275 00:01:38.275 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.275 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:38.275 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:38.275 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:38.275 + rm -f /tmp/spdk-ld-path 00:01:38.275 + source autorun-spdk.conf 00:01:38.275 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.275 ++ SPDK_TEST_NVMF=1 00:01:38.275 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.275 ++ SPDK_TEST_VFIOUSER=1 00:01:38.275 ++ SPDK_TEST_USDT=1 00:01:38.275 ++ SPDK_RUN_UBSAN=1 00:01:38.275 ++ SPDK_TEST_NVMF_MDNS=1 00:01:38.275 ++ NET_TYPE=virt 00:01:38.275 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:38.275 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.275 ++ RUN_NIGHTLY=1 00:01:38.275 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.275 + [[ -n '' ]] 00:01:38.275 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:38.557 + for M in /var/spdk/build-*-manifest.txt 00:01:38.557 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.557 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.557 + for M in /var/spdk/build-*-manifest.txt 00:01:38.557 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.557 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.557 ++ uname 00:01:38.557 + [[ Linux == \L\i\n\u\x ]] 00:01:38.557 + sudo dmesg -T 00:01:38.557 + sudo dmesg --clear 00:01:38.557 + dmesg_pid=5125 00:01:38.557 + [[ Fedora Linux == FreeBSD ]] 00:01:38.557 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.557 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.557 + sudo dmesg -Tw 00:01:38.557 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.557 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.557 + export FIO_BIN=/usr/src/fio-static/fio 00:01:38.557 + FIO_BIN=/usr/src/fio-static/fio 
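One detail worth calling out in the trace above: the scripts never hard-code the distro, they source /etc/os-release and branch on the variables it defines (NAME, ID, VERSION_ID and friends), which is why the long run of '++ NAME=...' assignments appears in the xtrace output and why a literal '[[ Fedora Linux == FreeBSD ]]' test shows up. The idiom in isolation:

    # Distro detection as used by the autotest scripts: /etc/os-release defines
    # NAME, ID, VERSION_ID, etc. as plain shell variables.
    source /etc/os-release
    echo "Detected ${NAME} ${VERSION_ID} (id=${ID})"
    if [[ $NAME == FreeBSD ]]; then
        echo "FreeBSD-specific setup would go here"
    fi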
00:01:38.557 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.557 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:38.557 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.557 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.557 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.557 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.557 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.557 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.557 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:38.557 Test configuration: 00:01:38.557 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.557 SPDK_TEST_NVMF=1 00:01:38.557 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.557 SPDK_TEST_VFIOUSER=1 00:01:38.557 SPDK_TEST_USDT=1 00:01:38.557 SPDK_RUN_UBSAN=1 00:01:38.557 SPDK_TEST_NVMF_MDNS=1 00:01:38.557 NET_TYPE=virt 00:01:38.557 SPDK_JSONRPC_GO_CLIENT=1 00:01:38.557 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.557 RUN_NIGHTLY=1 06:54:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:38.557 06:54:22 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.557 06:54:22 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.557 06:54:22 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.557 06:54:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.557 06:54:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.557 06:54:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.557 06:54:22 -- paths/export.sh@5 -- $ export PATH 00:01:38.557 06:54:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.557 06:54:22 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:38.557 06:54:22 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:38.557 06:54:22 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720680862.XXXXXX 00:01:38.557 06:54:22 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720680862.Ztpyjf 00:01:38.557 06:54:22 -- 
common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:38.557 06:54:22 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:38.557 06:54:22 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:38.557 06:54:22 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:38.557 06:54:22 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.557 06:54:22 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:38.557 06:54:22 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:38.557 06:54:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.557 06:54:22 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:01:38.557 06:54:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.557 06:54:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.557 06:54:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:38.557 06:54:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.557 Thu Jul 11 06:54:22 AM UTC 2024 00:01:38.557 06:54:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.557 LTS-59-g4b94202c6 00:01:38.557 06:54:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:38.557 06:54:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.557 06:54:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.557 06:54:22 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:38.557 06:54:22 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:38.557 06:54:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.557 ************************************ 00:01:38.557 START TEST ubsan 00:01:38.557 ************************************ 00:01:38.557 using ubsan 00:01:38.557 06:54:22 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:38.557 00:01:38.557 real 0m0.000s 00:01:38.557 user 0m0.000s 00:01:38.557 sys 0m0.000s 00:01:38.557 06:54:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:38.557 ************************************ 00:01:38.557 END TEST ubsan 00:01:38.557 ************************************ 00:01:38.557 06:54:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.831 06:54:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:38.831 06:54:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:38.831 06:54:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:38.831 06:54:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:38.831 06:54:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:38.831 06:54:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:38.831 06:54:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:38.831 06:54:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:38.831 06:54:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:01:38.831 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:38.831 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:39.400 Using 'verbs' RDMA provider 00:01:54.848 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:07.048 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:07.048 go version go1.21.1 linux/amd64 00:02:07.048 Creating mk/config.mk...done. 00:02:07.048 Creating mk/cc.flags.mk...done. 00:02:07.048 Type 'make' to build. 00:02:07.048 06:54:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:07.048 06:54:49 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:07.048 06:54:49 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:07.048 06:54:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.048 ************************************ 00:02:07.048 START TEST make 00:02:07.048 ************************************ 00:02:07.048 06:54:49 -- common/autotest_common.sh@1104 -- $ make -j10 00:02:07.048 make[1]: Nothing to be done for 'all'. 00:02:07.615 The Meson build system 00:02:07.615 Version: 1.3.1 00:02:07.615 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:07.615 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:07.615 Build type: native build 00:02:07.615 Project name: libvfio-user 00:02:07.615 Project version: 0.0.1 00:02:07.615 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:07.615 C linker for the host machine: cc ld.bfd 2.39-16 00:02:07.615 Host machine cpu family: x86_64 00:02:07.615 Host machine cpu: x86_64 00:02:07.615 Run-time dependency threads found: YES 00:02:07.615 Library dl found: YES 00:02:07.615 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:07.615 Run-time dependency json-c found: YES 0.17 00:02:07.615 Run-time dependency cmocka found: YES 1.1.7 00:02:07.615 Program pytest-3 found: NO 00:02:07.615 Program flake8 found: NO 00:02:07.615 Program misspell-fixer found: NO 00:02:07.615 Program restructuredtext-lint found: NO 00:02:07.615 Program valgrind found: YES (/usr/bin/valgrind) 00:02:07.615 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.615 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.615 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.615 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:07.615 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:07.615 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:07.615 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
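The 'START TEST'/'END TEST' banners and the real/user/sys timings that bracket the make test started above (and the ubsan test earlier) come from run_test in SPDK's common/autotest_common.sh. A simplified stand-in with the same observable behaviour, not the real implementation (which also handles xtrace toggling and failure bookkeeping), would be:

    # Simplified stand-in for SPDK's run_test helper: banner, timed command, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test make make -j10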
00:02:07.615 Build targets in project: 8 00:02:07.615 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:07.615 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:07.615 00:02:07.615 libvfio-user 0.0.1 00:02:07.615 00:02:07.615 User defined options 00:02:07.615 buildtype : debug 00:02:07.615 default_library: shared 00:02:07.615 libdir : /usr/local/lib 00:02:07.615 00:02:07.615 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.186 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:08.186 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:08.186 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:08.186 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:08.186 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:08.186 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:08.186 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:08.186 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:08.186 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:08.444 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:08.444 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:08.444 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:08.444 [12/37] Compiling C object samples/null.p/null.c.o 00:02:08.444 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:08.444 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:08.444 [15/37] Compiling C object samples/server.p/server.c.o 00:02:08.444 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:08.444 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:08.444 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:08.444 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:08.444 [20/37] Compiling C object samples/client.p/client.c.o 00:02:08.444 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:08.444 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:08.444 [23/37] Linking target lib/libvfio-user.so.0.0.1 00:02:08.444 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:08.444 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:08.444 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:08.444 [27/37] Linking target samples/client 00:02:08.703 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:08.703 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:08.703 [30/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:08.703 [31/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:08.703 [32/37] Linking target samples/lspci 00:02:08.703 [33/37] Linking target samples/server 00:02:08.703 [34/37] Linking target samples/null 00:02:08.703 [35/37] Linking target samples/gpio-pci-idio-16 00:02:08.703 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:08.703 [37/37] Linking target test/unit_tests 00:02:08.961 INFO: autodetecting backend as ninja 00:02:08.961 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:08.961 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:09.219 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:09.219 ninja: no work to do. 00:02:17.323 The Meson build system 00:02:17.323 Version: 1.3.1 00:02:17.323 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:17.323 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:17.323 Build type: native build 00:02:17.323 Program cat found: YES (/usr/bin/cat) 00:02:17.323 Project name: DPDK 00:02:17.323 Project version: 23.11.0 00:02:17.323 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:17.323 C linker for the host machine: cc ld.bfd 2.39-16 00:02:17.323 Host machine cpu family: x86_64 00:02:17.323 Host machine cpu: x86_64 00:02:17.323 Message: ## Building in Developer Mode ## 00:02:17.323 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.323 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:17.323 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.323 Program python3 found: YES (/usr/bin/python3) 00:02:17.323 Program cat found: YES (/usr/bin/cat) 00:02:17.323 Compiler for C supports arguments -march=native: YES 00:02:17.323 Checking for size of "void *" : 8 00:02:17.323 Checking for size of "void *" : 8 (cached) 00:02:17.323 Library m found: YES 00:02:17.323 Library numa found: YES 00:02:17.323 Has header "numaif.h" : YES 00:02:17.323 Library fdt found: NO 00:02:17.323 Library execinfo found: NO 00:02:17.323 Has header "execinfo.h" : YES 00:02:17.323 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:17.323 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.323 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.323 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.323 Run-time dependency openssl found: YES 3.0.9 00:02:17.323 Run-time dependency libpcap found: YES 1.10.4 00:02:17.323 Has header "pcap.h" with dependency libpcap: YES 00:02:17.323 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.323 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.323 Compiler for C supports arguments -Wformat: YES 00:02:17.323 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:17.323 Compiler for C supports arguments -Wformat-security: NO 00:02:17.323 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.323 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.323 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.323 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.323 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.323 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.323 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.323 Compiler for C supports arguments -Wundef: YES 00:02:17.323 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.323 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.323 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:17.323 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.323 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.323 Program objdump found: YES (/usr/bin/objdump) 00:02:17.323 
Compiler for C supports arguments -mavx512f: YES 00:02:17.323 Checking if "AVX512 checking" compiles: YES 00:02:17.323 Fetching value of define "__SSE4_2__" : 1 00:02:17.323 Fetching value of define "__AES__" : 1 00:02:17.323 Fetching value of define "__AVX__" : 1 00:02:17.323 Fetching value of define "__AVX2__" : 1 00:02:17.323 Fetching value of define "__AVX512BW__" : (undefined) 00:02:17.323 Fetching value of define "__AVX512CD__" : (undefined) 00:02:17.323 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:17.323 Fetching value of define "__AVX512F__" : (undefined) 00:02:17.323 Fetching value of define "__AVX512VL__" : (undefined) 00:02:17.323 Fetching value of define "__PCLMUL__" : 1 00:02:17.323 Fetching value of define "__RDRND__" : 1 00:02:17.323 Fetching value of define "__RDSEED__" : 1 00:02:17.323 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.323 Fetching value of define "__znver1__" : (undefined) 00:02:17.323 Fetching value of define "__znver2__" : (undefined) 00:02:17.323 Fetching value of define "__znver3__" : (undefined) 00:02:17.323 Fetching value of define "__znver4__" : (undefined) 00:02:17.323 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.323 Message: lib/log: Defining dependency "log" 00:02:17.323 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.323 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.323 Checking for function "getentropy" : NO 00:02:17.323 Message: lib/eal: Defining dependency "eal" 00:02:17.323 Message: lib/ring: Defining dependency "ring" 00:02:17.323 Message: lib/rcu: Defining dependency "rcu" 00:02:17.323 Message: lib/mempool: Defining dependency "mempool" 00:02:17.323 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.323 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:17.323 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.323 Compiler for C supports arguments -mpclmul: YES 00:02:17.323 Compiler for C supports arguments -maes: YES 00:02:17.323 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.323 Compiler for C supports arguments -mavx512bw: YES 00:02:17.323 Compiler for C supports arguments -mavx512dq: YES 00:02:17.323 Compiler for C supports arguments -mavx512vl: YES 00:02:17.323 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.323 Compiler for C supports arguments -mavx2: YES 00:02:17.323 Compiler for C supports arguments -mavx: YES 00:02:17.323 Message: lib/net: Defining dependency "net" 00:02:17.323 Message: lib/meter: Defining dependency "meter" 00:02:17.323 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.323 Message: lib/pci: Defining dependency "pci" 00:02:17.323 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.323 Message: lib/hash: Defining dependency "hash" 00:02:17.323 Message: lib/timer: Defining dependency "timer" 00:02:17.323 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.323 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.323 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.323 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.323 Message: lib/power: Defining dependency "power" 00:02:17.323 Message: lib/reorder: Defining dependency "reorder" 00:02:17.323 Message: lib/security: Defining dependency "security" 00:02:17.323 Has header "linux/userfaultfd.h" : YES 00:02:17.323 Has header "linux/vduse.h" : YES 00:02:17.323 Message: lib/vhost: Defining dependency "vhost" 00:02:17.324 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:17.324 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.324 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.324 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.324 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:17.324 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:17.324 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:17.324 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:17.324 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:17.324 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:17.324 Program doxygen found: YES (/usr/bin/doxygen) 00:02:17.324 Configuring doxy-api-html.conf using configuration 00:02:17.324 Configuring doxy-api-man.conf using configuration 00:02:17.324 Program mandb found: YES (/usr/bin/mandb) 00:02:17.324 Program sphinx-build found: NO 00:02:17.324 Configuring rte_build_config.h using configuration 00:02:17.324 Message: 00:02:17.324 ================= 00:02:17.324 Applications Enabled 00:02:17.324 ================= 00:02:17.324 00:02:17.324 apps: 00:02:17.324 00:02:17.324 00:02:17.324 Message: 00:02:17.324 ================= 00:02:17.324 Libraries Enabled 00:02:17.324 ================= 00:02:17.324 00:02:17.324 libs: 00:02:17.324 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.324 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:17.324 cryptodev, dmadev, power, reorder, security, vhost, 00:02:17.324 00:02:17.324 Message: 00:02:17.324 =============== 00:02:17.324 Drivers Enabled 00:02:17.324 =============== 00:02:17.324 00:02:17.324 common: 00:02:17.324 00:02:17.324 bus: 00:02:17.324 pci, vdev, 00:02:17.324 mempool: 00:02:17.324 ring, 00:02:17.324 dma: 00:02:17.324 00:02:17.324 net: 00:02:17.324 00:02:17.324 crypto: 00:02:17.324 00:02:17.324 compress: 00:02:17.324 00:02:17.324 vdpa: 00:02:17.324 00:02:17.324 00:02:17.324 Message: 00:02:17.324 ================= 00:02:17.324 Content Skipped 00:02:17.324 ================= 00:02:17.324 00:02:17.324 apps: 00:02:17.324 dumpcap: explicitly disabled via build config 00:02:17.324 graph: explicitly disabled via build config 00:02:17.324 pdump: explicitly disabled via build config 00:02:17.324 proc-info: explicitly disabled via build config 00:02:17.324 test-acl: explicitly disabled via build config 00:02:17.324 test-bbdev: explicitly disabled via build config 00:02:17.324 test-cmdline: explicitly disabled via build config 00:02:17.324 test-compress-perf: explicitly disabled via build config 00:02:17.324 test-crypto-perf: explicitly disabled via build config 00:02:17.324 test-dma-perf: explicitly disabled via build config 00:02:17.324 test-eventdev: explicitly disabled via build config 00:02:17.324 test-fib: explicitly disabled via build config 00:02:17.324 test-flow-perf: explicitly disabled via build config 00:02:17.324 test-gpudev: explicitly disabled via build config 00:02:17.324 test-mldev: explicitly disabled via build config 00:02:17.324 test-pipeline: explicitly disabled via build config 00:02:17.324 test-pmd: explicitly disabled via build config 00:02:17.324 test-regex: explicitly disabled via build config 00:02:17.324 test-sad: explicitly disabled via build config 00:02:17.324 test-security-perf: explicitly disabled via build config 00:02:17.324 00:02:17.324 libs: 00:02:17.324 metrics: explicitly disabled 
via build config 00:02:17.324 acl: explicitly disabled via build config 00:02:17.324 bbdev: explicitly disabled via build config 00:02:17.324 bitratestats: explicitly disabled via build config 00:02:17.324 bpf: explicitly disabled via build config 00:02:17.324 cfgfile: explicitly disabled via build config 00:02:17.324 distributor: explicitly disabled via build config 00:02:17.324 efd: explicitly disabled via build config 00:02:17.324 eventdev: explicitly disabled via build config 00:02:17.324 dispatcher: explicitly disabled via build config 00:02:17.324 gpudev: explicitly disabled via build config 00:02:17.324 gro: explicitly disabled via build config 00:02:17.324 gso: explicitly disabled via build config 00:02:17.324 ip_frag: explicitly disabled via build config 00:02:17.324 jobstats: explicitly disabled via build config 00:02:17.324 latencystats: explicitly disabled via build config 00:02:17.324 lpm: explicitly disabled via build config 00:02:17.324 member: explicitly disabled via build config 00:02:17.324 pcapng: explicitly disabled via build config 00:02:17.324 rawdev: explicitly disabled via build config 00:02:17.324 regexdev: explicitly disabled via build config 00:02:17.324 mldev: explicitly disabled via build config 00:02:17.324 rib: explicitly disabled via build config 00:02:17.324 sched: explicitly disabled via build config 00:02:17.324 stack: explicitly disabled via build config 00:02:17.324 ipsec: explicitly disabled via build config 00:02:17.324 pdcp: explicitly disabled via build config 00:02:17.324 fib: explicitly disabled via build config 00:02:17.324 port: explicitly disabled via build config 00:02:17.324 pdump: explicitly disabled via build config 00:02:17.324 table: explicitly disabled via build config 00:02:17.324 pipeline: explicitly disabled via build config 00:02:17.324 graph: explicitly disabled via build config 00:02:17.324 node: explicitly disabled via build config 00:02:17.324 00:02:17.324 drivers: 00:02:17.324 common/cpt: not in enabled drivers build config 00:02:17.324 common/dpaax: not in enabled drivers build config 00:02:17.324 common/iavf: not in enabled drivers build config 00:02:17.324 common/idpf: not in enabled drivers build config 00:02:17.324 common/mvep: not in enabled drivers build config 00:02:17.324 common/octeontx: not in enabled drivers build config 00:02:17.324 bus/auxiliary: not in enabled drivers build config 00:02:17.324 bus/cdx: not in enabled drivers build config 00:02:17.324 bus/dpaa: not in enabled drivers build config 00:02:17.324 bus/fslmc: not in enabled drivers build config 00:02:17.324 bus/ifpga: not in enabled drivers build config 00:02:17.324 bus/platform: not in enabled drivers build config 00:02:17.324 bus/vmbus: not in enabled drivers build config 00:02:17.324 common/cnxk: not in enabled drivers build config 00:02:17.324 common/mlx5: not in enabled drivers build config 00:02:17.324 common/nfp: not in enabled drivers build config 00:02:17.324 common/qat: not in enabled drivers build config 00:02:17.324 common/sfc_efx: not in enabled drivers build config 00:02:17.324 mempool/bucket: not in enabled drivers build config 00:02:17.324 mempool/cnxk: not in enabled drivers build config 00:02:17.324 mempool/dpaa: not in enabled drivers build config 00:02:17.324 mempool/dpaa2: not in enabled drivers build config 00:02:17.324 mempool/octeontx: not in enabled drivers build config 00:02:17.324 mempool/stack: not in enabled drivers build config 00:02:17.324 dma/cnxk: not in enabled drivers build config 00:02:17.324 dma/dpaa: not in enabled 
drivers build config 00:02:17.324 dma/dpaa2: not in enabled drivers build config 00:02:17.324 dma/hisilicon: not in enabled drivers build config 00:02:17.324 dma/idxd: not in enabled drivers build config 00:02:17.324 dma/ioat: not in enabled drivers build config 00:02:17.324 dma/skeleton: not in enabled drivers build config 00:02:17.324 net/af_packet: not in enabled drivers build config 00:02:17.324 net/af_xdp: not in enabled drivers build config 00:02:17.324 net/ark: not in enabled drivers build config 00:02:17.324 net/atlantic: not in enabled drivers build config 00:02:17.324 net/avp: not in enabled drivers build config 00:02:17.324 net/axgbe: not in enabled drivers build config 00:02:17.324 net/bnx2x: not in enabled drivers build config 00:02:17.324 net/bnxt: not in enabled drivers build config 00:02:17.324 net/bonding: not in enabled drivers build config 00:02:17.324 net/cnxk: not in enabled drivers build config 00:02:17.324 net/cpfl: not in enabled drivers build config 00:02:17.324 net/cxgbe: not in enabled drivers build config 00:02:17.324 net/dpaa: not in enabled drivers build config 00:02:17.324 net/dpaa2: not in enabled drivers build config 00:02:17.324 net/e1000: not in enabled drivers build config 00:02:17.324 net/ena: not in enabled drivers build config 00:02:17.324 net/enetc: not in enabled drivers build config 00:02:17.324 net/enetfec: not in enabled drivers build config 00:02:17.324 net/enic: not in enabled drivers build config 00:02:17.324 net/failsafe: not in enabled drivers build config 00:02:17.324 net/fm10k: not in enabled drivers build config 00:02:17.324 net/gve: not in enabled drivers build config 00:02:17.324 net/hinic: not in enabled drivers build config 00:02:17.324 net/hns3: not in enabled drivers build config 00:02:17.324 net/i40e: not in enabled drivers build config 00:02:17.324 net/iavf: not in enabled drivers build config 00:02:17.324 net/ice: not in enabled drivers build config 00:02:17.324 net/idpf: not in enabled drivers build config 00:02:17.324 net/igc: not in enabled drivers build config 00:02:17.324 net/ionic: not in enabled drivers build config 00:02:17.324 net/ipn3ke: not in enabled drivers build config 00:02:17.324 net/ixgbe: not in enabled drivers build config 00:02:17.324 net/mana: not in enabled drivers build config 00:02:17.324 net/memif: not in enabled drivers build config 00:02:17.324 net/mlx4: not in enabled drivers build config 00:02:17.324 net/mlx5: not in enabled drivers build config 00:02:17.324 net/mvneta: not in enabled drivers build config 00:02:17.324 net/mvpp2: not in enabled drivers build config 00:02:17.324 net/netvsc: not in enabled drivers build config 00:02:17.324 net/nfb: not in enabled drivers build config 00:02:17.324 net/nfp: not in enabled drivers build config 00:02:17.324 net/ngbe: not in enabled drivers build config 00:02:17.324 net/null: not in enabled drivers build config 00:02:17.324 net/octeontx: not in enabled drivers build config 00:02:17.324 net/octeon_ep: not in enabled drivers build config 00:02:17.324 net/pcap: not in enabled drivers build config 00:02:17.324 net/pfe: not in enabled drivers build config 00:02:17.324 net/qede: not in enabled drivers build config 00:02:17.324 net/ring: not in enabled drivers build config 00:02:17.324 net/sfc: not in enabled drivers build config 00:02:17.324 net/softnic: not in enabled drivers build config 00:02:17.324 net/tap: not in enabled drivers build config 00:02:17.324 net/thunderx: not in enabled drivers build config 00:02:17.324 net/txgbe: not in enabled drivers build 
config 00:02:17.324 net/vdev_netvsc: not in enabled drivers build config 00:02:17.324 net/vhost: not in enabled drivers build config 00:02:17.324 net/virtio: not in enabled drivers build config 00:02:17.324 net/vmxnet3: not in enabled drivers build config 00:02:17.324 raw/*: missing internal dependency, "rawdev" 00:02:17.325 crypto/armv8: not in enabled drivers build config 00:02:17.325 crypto/bcmfs: not in enabled drivers build config 00:02:17.325 crypto/caam_jr: not in enabled drivers build config 00:02:17.325 crypto/ccp: not in enabled drivers build config 00:02:17.325 crypto/cnxk: not in enabled drivers build config 00:02:17.325 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.325 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.325 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.325 crypto/mlx5: not in enabled drivers build config 00:02:17.325 crypto/mvsam: not in enabled drivers build config 00:02:17.325 crypto/nitrox: not in enabled drivers build config 00:02:17.325 crypto/null: not in enabled drivers build config 00:02:17.325 crypto/octeontx: not in enabled drivers build config 00:02:17.325 crypto/openssl: not in enabled drivers build config 00:02:17.325 crypto/scheduler: not in enabled drivers build config 00:02:17.325 crypto/uadk: not in enabled drivers build config 00:02:17.325 crypto/virtio: not in enabled drivers build config 00:02:17.325 compress/isal: not in enabled drivers build config 00:02:17.325 compress/mlx5: not in enabled drivers build config 00:02:17.325 compress/octeontx: not in enabled drivers build config 00:02:17.325 compress/zlib: not in enabled drivers build config 00:02:17.325 regex/*: missing internal dependency, "regexdev" 00:02:17.325 ml/*: missing internal dependency, "mldev" 00:02:17.325 vdpa/ifc: not in enabled drivers build config 00:02:17.325 vdpa/mlx5: not in enabled drivers build config 00:02:17.325 vdpa/nfp: not in enabled drivers build config 00:02:17.325 vdpa/sfc: not in enabled drivers build config 00:02:17.325 event/*: missing internal dependency, "eventdev" 00:02:17.325 baseband/*: missing internal dependency, "bbdev" 00:02:17.325 gpu/*: missing internal dependency, "gpudev" 00:02:17.325 00:02:17.325 00:02:17.325 Build targets in project: 85 00:02:17.325 00:02:17.325 DPDK 23.11.0 00:02:17.325 00:02:17.325 User defined options 00:02:17.325 buildtype : debug 00:02:17.325 default_library : shared 00:02:17.325 libdir : lib 00:02:17.325 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:17.325 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:17.325 c_link_args : 00:02:17.325 cpu_instruction_set: native 00:02:17.325 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:17.325 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:17.325 enable_docs : false 00:02:17.325 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:17.325 enable_kmods : false 00:02:17.325 tests : false 00:02:17.325 00:02:17.325 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.890 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 
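The 'User defined options' dump above maps onto a meson setup invocation along these lines, reconstructed from the printed options (SPDK's dpdkbuild wrapper is what actually issues it, and its exact quoting and ordering may differ; the long disable_apps/disable_libs lists are exactly the comma-separated lists printed above and are left out here for brevity):

    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp /home/vagrant/spdk_repo/spdk/dpdk \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false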
00:02:17.890 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.890 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.890 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.890 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.890 [5/265] Linking static target lib/librte_kvargs.a 00:02:17.890 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.890 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.890 [8/265] Linking static target lib/librte_log.a 00:02:17.890 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:18.159 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.440 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.697 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.697 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.697 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:18.956 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.956 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.956 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.956 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.956 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.956 [20/265] Linking static target lib/librte_telemetry.a 00:02:18.956 [21/265] Linking target lib/librte_log.so.24.0 00:02:18.956 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.214 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.214 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:19.214 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:19.214 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.473 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.473 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:19.473 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.731 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.731 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.731 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.731 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.990 [34/265] Linking target lib/librte_telemetry.so.24.0 00:02:19.990 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.990 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.990 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.249 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:20.249 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:20.249 [40/265] Generating symbol file 
lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:20.249 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:20.249 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:20.249 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:20.249 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:20.249 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:20.507 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.766 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:21.024 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:21.024 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:21.024 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:21.024 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.283 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:21.283 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:21.283 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:21.283 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.283 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.283 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.542 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:21.542 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:21.542 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.542 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:21.800 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.800 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:22.058 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:22.058 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.058 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:22.058 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:22.315 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:22.315 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.573 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.573 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.573 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:22.573 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.573 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.573 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:22.573 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.831 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:22.831 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:22.831 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:23.090 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.090 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:23.348 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:23.348 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.606 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.606 [85/265] Linking static target lib/librte_eal.a 00:02:23.606 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.606 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:23.606 [88/265] Linking static target lib/librte_ring.a 00:02:23.606 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:23.864 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.864 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.864 [92/265] Linking static target lib/librte_rcu.a 00:02:24.121 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.121 [94/265] Linking static target lib/librte_mempool.a 00:02:24.121 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.122 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.379 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.379 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.379 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.379 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.637 [101/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.637 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.637 [103/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.637 [104/265] Linking static target lib/librte_mbuf.a 00:02:25.203 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:25.203 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:25.203 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:25.204 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.204 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:25.204 [110/265] Linking static target lib/librte_meter.a 00:02:25.204 [111/265] Linking static target lib/librte_net.a 00:02:25.463 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.721 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.721 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:25.721 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.980 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.980 [117/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.268 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:26.269 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:26.835 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:26.835 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:02:27.093 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:27.093 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:27.093 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:27.351 [125/265] Linking static target lib/librte_pci.a 00:02:27.351 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.351 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.351 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.351 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.609 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.609 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.609 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.609 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.609 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:27.867 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.867 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:27.867 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.867 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:27.867 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:27.867 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.867 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.867 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:28.125 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:28.383 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:28.383 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:28.383 [146/265] Linking static target lib/librte_cmdline.a 00:02:28.641 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:28.641 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.641 [149/265] Linking static target lib/librte_ethdev.a 00:02:28.641 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:28.641 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:28.641 [152/265] Linking static target lib/librte_timer.a 00:02:28.900 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.159 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.159 [155/265] Linking static target lib/librte_hash.a 00:02:29.159 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.417 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:29.417 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.417 [159/265] Linking static target lib/librte_compressdev.a 00:02:29.417 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.417 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:29.675 [162/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:29.933 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:29.933 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:29.933 [165/265] Linking static target lib/librte_dmadev.a 00:02:29.933 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.190 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.190 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.190 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.190 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.190 [171/265] Linking static target lib/librte_cryptodev.a 00:02:30.447 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.447 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.447 [174/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.706 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.706 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.706 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.964 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.964 [179/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.964 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.964 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:31.222 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:31.222 [183/265] Linking static target lib/librte_power.a 00:02:31.222 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.479 [185/265] Linking static target lib/librte_reorder.a 00:02:31.736 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.736 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.995 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.995 [189/265] Linking static target lib/librte_security.a 00:02:31.995 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.995 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.995 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:32.560 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.560 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.819 [195/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.819 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:32.819 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:32.819 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.819 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:33.077 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:33.077 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:33.336 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:33.336 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:33.336 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:33.595 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:33.595 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:33.595 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:33.595 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:33.595 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:33.871 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:33.871 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.871 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.871 [213/265] Linking static target drivers/librte_bus_vdev.a 00:02:33.871 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:33.871 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.871 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.871 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:33.871 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:33.871 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:34.137 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.137 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:34.137 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.137 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.137 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:34.396 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.961 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.961 [227/265] Linking static target lib/librte_vhost.a 00:02:35.896 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.896 [229/265] Linking target lib/librte_eal.so.24.0 00:02:36.153 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:36.153 [231/265] Linking target lib/librte_timer.so.24.0 00:02:36.153 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:36.153 [233/265] Linking target lib/librte_pci.so.24.0 00:02:36.153 [234/265] Linking target lib/librte_ring.so.24.0 00:02:36.153 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:36.153 [236/265] Linking target lib/librte_meter.so.24.0 00:02:36.153 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:36.153 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:36.153 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:36.153 [240/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:36.153 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:36.411 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:36.411 [243/265] Linking target lib/librte_mempool.so.24.0 00:02:36.411 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:36.411 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:36.411 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:36.411 [247/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.411 [248/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.411 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:36.411 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:36.668 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:36.669 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:36.669 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:36.669 [254/265] Linking target lib/librte_net.so.24.0 00:02:36.669 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:36.927 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:36.927 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:36.927 [258/265] Linking target lib/librte_hash.so.24.0 00:02:36.927 [259/265] Linking target lib/librte_security.so.24.0 00:02:36.927 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:36.927 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:37.184 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:37.184 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:37.184 [264/265] Linking target lib/librte_power.so.24.0 00:02:37.184 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:37.184 INFO: autodetecting backend as ninja 00:02:37.184 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:39.083 CC lib/log/log.o 00:02:39.083 CC lib/ut_mock/mock.o 00:02:39.083 CC lib/log/log_deprecated.o 00:02:39.083 CC lib/log/log_flags.o 00:02:39.083 CC lib/ut/ut.o 00:02:39.083 LIB libspdk_ut_mock.a 00:02:39.083 LIB libspdk_log.a 00:02:39.083 LIB libspdk_ut.a 00:02:39.083 SO libspdk_ut_mock.so.5.0 00:02:39.083 SO libspdk_ut.so.1.0 00:02:39.083 SO libspdk_log.so.6.1 00:02:39.340 SYMLINK libspdk_ut_mock.so 00:02:39.340 SYMLINK libspdk_ut.so 00:02:39.340 SYMLINK libspdk_log.so 00:02:39.340 CC lib/util/bit_array.o 00:02:39.340 CC lib/util/base64.o 00:02:39.340 CC lib/ioat/ioat.o 00:02:39.340 CC lib/util/crc16.o 00:02:39.340 CC lib/dma/dma.o 00:02:39.340 CC lib/util/cpuset.o 00:02:39.340 CC lib/util/crc32.o 00:02:39.340 CC lib/util/crc32c.o 00:02:39.340 CXX lib/trace_parser/trace.o 00:02:39.598 CC lib/vfio_user/host/vfio_user_pci.o 00:02:39.598 CC lib/vfio_user/host/vfio_user.o 00:02:39.598 CC lib/util/crc32_ieee.o 00:02:39.598 CC lib/util/crc64.o 00:02:39.598 CC lib/util/dif.o 00:02:39.598 LIB libspdk_dma.a 00:02:39.598 CC lib/util/fd.o 00:02:39.598 SO libspdk_dma.so.3.0 00:02:39.598 CC lib/util/file.o 00:02:39.856 CC lib/util/hexlify.o 00:02:39.856 LIB libspdk_ioat.a 00:02:39.856 SYMLINK libspdk_dma.so 00:02:39.856 CC lib/util/iov.o 00:02:39.856 CC lib/util/math.o 00:02:39.856 SO 
libspdk_ioat.so.6.0 00:02:39.856 CC lib/util/pipe.o 00:02:39.856 CC lib/util/strerror_tls.o 00:02:39.856 LIB libspdk_vfio_user.a 00:02:39.856 CC lib/util/string.o 00:02:39.856 SO libspdk_vfio_user.so.4.0 00:02:39.856 SYMLINK libspdk_ioat.so 00:02:39.856 CC lib/util/uuid.o 00:02:39.856 SYMLINK libspdk_vfio_user.so 00:02:39.856 CC lib/util/fd_group.o 00:02:39.856 CC lib/util/xor.o 00:02:39.856 CC lib/util/zipf.o 00:02:40.113 LIB libspdk_util.a 00:02:40.370 SO libspdk_util.so.8.0 00:02:40.370 LIB libspdk_trace_parser.a 00:02:40.370 SYMLINK libspdk_util.so 00:02:40.370 SO libspdk_trace_parser.so.4.0 00:02:40.628 CC lib/rdma/common.o 00:02:40.628 CC lib/json/json_parse.o 00:02:40.628 CC lib/conf/conf.o 00:02:40.628 CC lib/rdma/rdma_verbs.o 00:02:40.628 CC lib/env_dpdk/env.o 00:02:40.628 CC lib/json/json_util.o 00:02:40.628 CC lib/env_dpdk/memory.o 00:02:40.628 CC lib/vmd/vmd.o 00:02:40.628 CC lib/idxd/idxd.o 00:02:40.628 SYMLINK libspdk_trace_parser.so 00:02:40.628 CC lib/idxd/idxd_user.o 00:02:40.885 CC lib/idxd/idxd_kernel.o 00:02:40.885 LIB libspdk_conf.a 00:02:40.885 CC lib/vmd/led.o 00:02:40.885 CC lib/json/json_write.o 00:02:40.885 SO libspdk_conf.so.5.0 00:02:40.885 CC lib/env_dpdk/pci.o 00:02:40.885 LIB libspdk_rdma.a 00:02:40.885 SYMLINK libspdk_conf.so 00:02:40.885 CC lib/env_dpdk/init.o 00:02:40.885 SO libspdk_rdma.so.5.0 00:02:40.885 CC lib/env_dpdk/threads.o 00:02:40.885 CC lib/env_dpdk/pci_ioat.o 00:02:40.885 SYMLINK libspdk_rdma.so 00:02:40.885 CC lib/env_dpdk/pci_virtio.o 00:02:41.143 LIB libspdk_json.a 00:02:41.143 CC lib/env_dpdk/pci_vmd.o 00:02:41.143 CC lib/env_dpdk/pci_idxd.o 00:02:41.143 LIB libspdk_idxd.a 00:02:41.143 CC lib/env_dpdk/pci_event.o 00:02:41.143 SO libspdk_json.so.5.1 00:02:41.143 SO libspdk_idxd.so.11.0 00:02:41.143 LIB libspdk_vmd.a 00:02:41.143 CC lib/env_dpdk/sigbus_handler.o 00:02:41.143 SYMLINK libspdk_json.so 00:02:41.143 CC lib/env_dpdk/pci_dpdk.o 00:02:41.143 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:41.143 SYMLINK libspdk_idxd.so 00:02:41.143 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:41.143 SO libspdk_vmd.so.5.0 00:02:41.401 SYMLINK libspdk_vmd.so 00:02:41.401 CC lib/jsonrpc/jsonrpc_server.o 00:02:41.401 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:41.401 CC lib/jsonrpc/jsonrpc_client.o 00:02:41.401 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:41.660 LIB libspdk_jsonrpc.a 00:02:41.660 SO libspdk_jsonrpc.so.5.1 00:02:41.660 SYMLINK libspdk_jsonrpc.so 00:02:41.929 LIB libspdk_env_dpdk.a 00:02:41.929 CC lib/rpc/rpc.o 00:02:41.929 SO libspdk_env_dpdk.so.13.0 00:02:41.929 LIB libspdk_rpc.a 00:02:42.193 SO libspdk_rpc.so.5.0 00:02:42.193 SYMLINK libspdk_env_dpdk.so 00:02:42.193 SYMLINK libspdk_rpc.so 00:02:42.193 CC lib/notify/notify.o 00:02:42.193 CC lib/notify/notify_rpc.o 00:02:42.193 CC lib/sock/sock.o 00:02:42.193 CC lib/sock/sock_rpc.o 00:02:42.193 CC lib/trace/trace.o 00:02:42.193 CC lib/trace/trace_flags.o 00:02:42.193 CC lib/trace/trace_rpc.o 00:02:42.451 LIB libspdk_notify.a 00:02:42.451 SO libspdk_notify.so.5.0 00:02:42.451 LIB libspdk_trace.a 00:02:42.451 SO libspdk_trace.so.9.0 00:02:42.451 SYMLINK libspdk_notify.so 00:02:42.709 SYMLINK libspdk_trace.so 00:02:42.709 LIB libspdk_sock.a 00:02:42.709 SO libspdk_sock.so.8.0 00:02:42.709 CC lib/thread/thread.o 00:02:42.709 CC lib/thread/iobuf.o 00:02:42.968 SYMLINK libspdk_sock.so 00:02:42.968 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.968 CC lib/nvme/nvme_ctrlr.o 00:02:42.968 CC lib/nvme/nvme_fabric.o 00:02:42.968 CC lib/nvme/nvme_ns.o 00:02:42.968 CC lib/nvme/nvme_ns_cmd.o 00:02:42.968 CC 
lib/nvme/nvme_pcie_common.o 00:02:42.968 CC lib/nvme/nvme_qpair.o 00:02:42.968 CC lib/nvme/nvme_pcie.o 00:02:43.226 CC lib/nvme/nvme.o 00:02:43.793 CC lib/nvme/nvme_quirks.o 00:02:43.793 CC lib/nvme/nvme_transport.o 00:02:43.793 CC lib/nvme/nvme_discovery.o 00:02:43.793 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:44.051 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:44.051 CC lib/nvme/nvme_tcp.o 00:02:44.051 CC lib/nvme/nvme_opal.o 00:02:44.051 CC lib/nvme/nvme_io_msg.o 00:02:44.309 LIB libspdk_thread.a 00:02:44.309 CC lib/nvme/nvme_poll_group.o 00:02:44.309 SO libspdk_thread.so.9.0 00:02:44.309 SYMLINK libspdk_thread.so 00:02:44.568 CC lib/accel/accel.o 00:02:44.568 CC lib/nvme/nvme_zns.o 00:02:44.568 CC lib/nvme/nvme_cuse.o 00:02:44.568 CC lib/blob/blobstore.o 00:02:44.568 CC lib/init/json_config.o 00:02:44.568 CC lib/init/subsystem.o 00:02:44.827 CC lib/accel/accel_rpc.o 00:02:44.827 CC lib/accel/accel_sw.o 00:02:44.827 CC lib/init/subsystem_rpc.o 00:02:44.827 CC lib/nvme/nvme_vfio_user.o 00:02:44.827 CC lib/init/rpc.o 00:02:44.827 CC lib/nvme/nvme_rdma.o 00:02:45.086 CC lib/blob/request.o 00:02:45.086 LIB libspdk_init.a 00:02:45.086 SO libspdk_init.so.4.0 00:02:45.086 CC lib/blob/zeroes.o 00:02:45.086 SYMLINK libspdk_init.so 00:02:45.086 CC lib/virtio/virtio.o 00:02:45.344 CC lib/vfu_tgt/tgt_endpoint.o 00:02:45.345 CC lib/blob/blob_bs_dev.o 00:02:45.345 CC lib/virtio/virtio_vhost_user.o 00:02:45.345 CC lib/virtio/virtio_vfio_user.o 00:02:45.345 LIB libspdk_accel.a 00:02:45.345 CC lib/event/app.o 00:02:45.345 SO libspdk_accel.so.14.0 00:02:45.345 CC lib/event/reactor.o 00:02:45.603 CC lib/event/log_rpc.o 00:02:45.603 SYMLINK libspdk_accel.so 00:02:45.603 CC lib/event/app_rpc.o 00:02:45.603 CC lib/event/scheduler_static.o 00:02:45.603 CC lib/vfu_tgt/tgt_rpc.o 00:02:45.603 CC lib/virtio/virtio_pci.o 00:02:45.861 LIB libspdk_vfu_tgt.a 00:02:45.861 CC lib/bdev/bdev.o 00:02:45.862 CC lib/bdev/bdev_rpc.o 00:02:45.862 CC lib/bdev/bdev_zone.o 00:02:45.862 CC lib/bdev/part.o 00:02:45.862 CC lib/bdev/scsi_nvme.o 00:02:45.862 SO libspdk_vfu_tgt.so.2.0 00:02:45.862 LIB libspdk_event.a 00:02:45.862 SYMLINK libspdk_vfu_tgt.so 00:02:45.862 SO libspdk_event.so.12.0 00:02:45.862 LIB libspdk_virtio.a 00:02:45.862 SO libspdk_virtio.so.6.0 00:02:45.862 SYMLINK libspdk_event.so 00:02:46.120 SYMLINK libspdk_virtio.so 00:02:46.120 LIB libspdk_nvme.a 00:02:46.378 SO libspdk_nvme.so.12.0 00:02:46.637 SYMLINK libspdk_nvme.so 00:02:47.204 LIB libspdk_blob.a 00:02:47.204 SO libspdk_blob.so.10.1 00:02:47.204 SYMLINK libspdk_blob.so 00:02:47.462 CC lib/blobfs/blobfs.o 00:02:47.462 CC lib/blobfs/tree.o 00:02:47.462 CC lib/lvol/lvol.o 00:02:48.029 LIB libspdk_bdev.a 00:02:48.287 SO libspdk_bdev.so.14.0 00:02:48.288 LIB libspdk_blobfs.a 00:02:48.288 SO libspdk_blobfs.so.9.0 00:02:48.288 SYMLINK libspdk_bdev.so 00:02:48.288 LIB libspdk_lvol.a 00:02:48.288 SO libspdk_lvol.so.9.1 00:02:48.288 SYMLINK libspdk_blobfs.so 00:02:48.545 SYMLINK libspdk_lvol.so 00:02:48.546 CC lib/ftl/ftl_core.o 00:02:48.546 CC lib/nbd/nbd.o 00:02:48.546 CC lib/nbd/nbd_rpc.o 00:02:48.546 CC lib/ftl/ftl_init.o 00:02:48.546 CC lib/ftl/ftl_layout.o 00:02:48.546 CC lib/nvmf/ctrlr.o 00:02:48.546 CC lib/ftl/ftl_io.o 00:02:48.546 CC lib/ftl/ftl_debug.o 00:02:48.546 CC lib/scsi/dev.o 00:02:48.546 CC lib/ublk/ublk.o 00:02:48.546 CC lib/ublk/ublk_rpc.o 00:02:48.804 CC lib/scsi/lun.o 00:02:48.804 CC lib/nvmf/ctrlr_discovery.o 00:02:48.804 CC lib/nvmf/ctrlr_bdev.o 00:02:48.804 CC lib/scsi/port.o 00:02:48.804 CC lib/nvmf/subsystem.o 00:02:48.804 CC lib/scsi/scsi.o 
00:02:48.804 CC lib/ftl/ftl_sb.o 00:02:48.804 CC lib/ftl/ftl_l2p.o 00:02:48.804 LIB libspdk_nbd.a 00:02:49.062 SO libspdk_nbd.so.6.0 00:02:49.062 CC lib/nvmf/nvmf.o 00:02:49.062 SYMLINK libspdk_nbd.so 00:02:49.062 CC lib/scsi/scsi_bdev.o 00:02:49.062 CC lib/ftl/ftl_l2p_flat.o 00:02:49.062 LIB libspdk_ublk.a 00:02:49.062 CC lib/scsi/scsi_pr.o 00:02:49.062 SO libspdk_ublk.so.2.0 00:02:49.062 CC lib/nvmf/nvmf_rpc.o 00:02:49.062 CC lib/scsi/scsi_rpc.o 00:02:49.062 SYMLINK libspdk_ublk.so 00:02:49.062 CC lib/nvmf/transport.o 00:02:49.321 CC lib/ftl/ftl_nv_cache.o 00:02:49.321 CC lib/scsi/task.o 00:02:49.321 CC lib/nvmf/tcp.o 00:02:49.321 CC lib/nvmf/vfio_user.o 00:02:49.581 CC lib/ftl/ftl_band.o 00:02:49.581 LIB libspdk_scsi.a 00:02:49.581 SO libspdk_scsi.so.8.0 00:02:49.581 SYMLINK libspdk_scsi.so 00:02:49.581 CC lib/ftl/ftl_band_ops.o 00:02:49.838 CC lib/nvmf/rdma.o 00:02:49.838 CC lib/ftl/ftl_writer.o 00:02:49.838 CC lib/ftl/ftl_rq.o 00:02:49.838 CC lib/iscsi/conn.o 00:02:49.838 CC lib/vhost/vhost.o 00:02:49.839 CC lib/iscsi/init_grp.o 00:02:50.096 CC lib/ftl/ftl_reloc.o 00:02:50.096 CC lib/vhost/vhost_rpc.o 00:02:50.096 CC lib/iscsi/iscsi.o 00:02:50.096 CC lib/iscsi/md5.o 00:02:50.096 CC lib/iscsi/param.o 00:02:50.354 CC lib/iscsi/portal_grp.o 00:02:50.354 CC lib/ftl/ftl_l2p_cache.o 00:02:50.354 CC lib/ftl/ftl_p2l.o 00:02:50.613 CC lib/iscsi/tgt_node.o 00:02:50.613 CC lib/vhost/vhost_scsi.o 00:02:50.613 CC lib/vhost/vhost_blk.o 00:02:50.613 CC lib/vhost/rte_vhost_user.o 00:02:50.613 CC lib/ftl/mngt/ftl_mngt.o 00:02:50.872 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:50.872 CC lib/iscsi/iscsi_subsystem.o 00:02:50.872 CC lib/iscsi/iscsi_rpc.o 00:02:50.872 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:50.872 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:51.131 CC lib/iscsi/task.o 00:02:51.131 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.131 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.131 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.389 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.389 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.389 LIB libspdk_iscsi.a 00:02:51.389 SO libspdk_iscsi.so.7.0 00:02:51.389 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.389 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.389 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.389 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.389 CC lib/ftl/utils/ftl_conf.o 00:02:51.648 CC lib/ftl/utils/ftl_md.o 00:02:51.648 SYMLINK libspdk_iscsi.so 00:02:51.648 CC lib/ftl/utils/ftl_mempool.o 00:02:51.648 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.648 CC lib/ftl/utils/ftl_property.o 00:02:51.648 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:51.648 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:51.648 LIB libspdk_vhost.a 00:02:51.648 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:51.648 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:51.648 SO libspdk_vhost.so.7.1 00:02:51.648 LIB libspdk_nvmf.a 00:02:51.907 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:51.907 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:51.907 SYMLINK libspdk_vhost.so 00:02:51.907 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:51.907 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:51.907 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:51.907 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:51.907 SO libspdk_nvmf.so.17.0 00:02:51.907 CC lib/ftl/base/ftl_base_dev.o 00:02:51.907 CC lib/ftl/base/ftl_base_bdev.o 00:02:51.907 CC lib/ftl/ftl_trace.o 00:02:52.165 SYMLINK libspdk_nvmf.so 00:02:52.165 LIB libspdk_ftl.a 00:02:52.424 SO libspdk_ftl.so.8.0 00:02:52.688 SYMLINK libspdk_ftl.so 00:02:52.982 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.982 CC module/vfu_device/vfu_virtio.o 
00:02:52.982 CC module/accel/error/accel_error.o 00:02:52.982 CC module/accel/dsa/accel_dsa.o 00:02:52.982 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:52.982 CC module/accel/ioat/accel_ioat.o 00:02:52.982 CC module/blob/bdev/blob_bdev.o 00:02:52.982 CC module/sock/posix/posix.o 00:02:52.982 CC module/accel/iaa/accel_iaa.o 00:02:52.982 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:52.982 LIB libspdk_env_dpdk_rpc.a 00:02:53.256 SO libspdk_env_dpdk_rpc.so.5.0 00:02:53.256 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.256 SYMLINK libspdk_env_dpdk_rpc.so 00:02:53.256 CC module/accel/error/accel_error_rpc.o 00:02:53.256 CC module/vfu_device/vfu_virtio_blk.o 00:02:53.256 CC module/accel/ioat/accel_ioat_rpc.o 00:02:53.256 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:53.256 LIB libspdk_scheduler_dynamic.a 00:02:53.256 CC module/accel/dsa/accel_dsa_rpc.o 00:02:53.256 CC module/accel/iaa/accel_iaa_rpc.o 00:02:53.256 SO libspdk_scheduler_dynamic.so.3.0 00:02:53.256 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:53.256 LIB libspdk_blob_bdev.a 00:02:53.256 SO libspdk_blob_bdev.so.10.1 00:02:53.256 LIB libspdk_accel_error.a 00:02:53.256 LIB libspdk_accel_ioat.a 00:02:53.256 SYMLINK libspdk_scheduler_dynamic.so 00:02:53.256 CC module/vfu_device/vfu_virtio_scsi.o 00:02:53.256 SO libspdk_accel_error.so.1.0 00:02:53.256 SO libspdk_accel_ioat.so.5.0 00:02:53.256 SYMLINK libspdk_blob_bdev.so 00:02:53.514 LIB libspdk_accel_dsa.a 00:02:53.514 LIB libspdk_accel_iaa.a 00:02:53.514 CC module/scheduler/gscheduler/gscheduler.o 00:02:53.514 CC module/vfu_device/vfu_virtio_rpc.o 00:02:53.514 SO libspdk_accel_dsa.so.4.0 00:02:53.514 SO libspdk_accel_iaa.so.2.0 00:02:53.514 SYMLINK libspdk_accel_ioat.so 00:02:53.514 SYMLINK libspdk_accel_error.so 00:02:53.514 SYMLINK libspdk_accel_dsa.so 00:02:53.514 SYMLINK libspdk_accel_iaa.so 00:02:53.514 LIB libspdk_scheduler_gscheduler.a 00:02:53.514 SO libspdk_scheduler_gscheduler.so.3.0 00:02:53.514 CC module/bdev/error/vbdev_error.o 00:02:53.514 CC module/bdev/lvol/vbdev_lvol.o 00:02:53.514 CC module/bdev/delay/vbdev_delay.o 00:02:53.514 CC module/bdev/gpt/gpt.o 00:02:53.773 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.773 SYMLINK libspdk_scheduler_gscheduler.so 00:02:53.773 LIB libspdk_vfu_device.a 00:02:53.773 LIB libspdk_sock_posix.a 00:02:53.773 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.773 CC module/bdev/malloc/bdev_malloc.o 00:02:53.773 SO libspdk_sock_posix.so.5.0 00:02:53.773 CC module/bdev/null/bdev_null.o 00:02:53.773 SO libspdk_vfu_device.so.2.0 00:02:53.773 SYMLINK libspdk_sock_posix.so 00:02:53.773 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.773 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:53.773 SYMLINK libspdk_vfu_device.so 00:02:53.773 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.773 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.773 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:53.773 LIB libspdk_bdev_error.a 00:02:54.032 SO libspdk_bdev_error.so.5.0 00:02:54.032 SYMLINK libspdk_bdev_error.so 00:02:54.032 CC module/bdev/null/bdev_null_rpc.o 00:02:54.032 LIB libspdk_bdev_delay.a 00:02:54.032 LIB libspdk_blobfs_bdev.a 00:02:54.032 SO libspdk_bdev_delay.so.5.0 00:02:54.032 SO libspdk_blobfs_bdev.so.5.0 00:02:54.032 LIB libspdk_bdev_malloc.a 00:02:54.032 CC module/bdev/nvme/bdev_nvme.o 00:02:54.032 LIB libspdk_bdev_gpt.a 00:02:54.032 CC module/bdev/passthru/vbdev_passthru.o 00:02:54.032 SO libspdk_bdev_malloc.so.5.0 00:02:54.032 SO libspdk_bdev_gpt.so.5.0 00:02:54.032 SYMLINK libspdk_bdev_delay.so 00:02:54.032 LIB libspdk_bdev_lvol.a 
00:02:54.032 SYMLINK libspdk_blobfs_bdev.so 00:02:54.032 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:54.291 CC module/bdev/nvme/nvme_rpc.o 00:02:54.291 CC module/bdev/raid/bdev_raid.o 00:02:54.291 LIB libspdk_bdev_null.a 00:02:54.291 SYMLINK libspdk_bdev_malloc.so 00:02:54.291 SO libspdk_bdev_lvol.so.5.0 00:02:54.291 CC module/bdev/nvme/bdev_mdns_client.o 00:02:54.291 SYMLINK libspdk_bdev_gpt.so 00:02:54.291 CC module/bdev/nvme/vbdev_opal.o 00:02:54.291 SO libspdk_bdev_null.so.5.0 00:02:54.291 CC module/bdev/split/vbdev_split.o 00:02:54.291 SYMLINK libspdk_bdev_lvol.so 00:02:54.291 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:54.291 SYMLINK libspdk_bdev_null.so 00:02:54.291 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:54.291 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:54.291 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:54.550 CC module/bdev/split/vbdev_split_rpc.o 00:02:54.550 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:54.550 LIB libspdk_bdev_passthru.a 00:02:54.550 CC module/bdev/aio/bdev_aio.o 00:02:54.550 CC module/bdev/ftl/bdev_ftl.o 00:02:54.550 SO libspdk_bdev_passthru.so.5.0 00:02:54.550 LIB libspdk_bdev_split.a 00:02:54.550 SYMLINK libspdk_bdev_passthru.so 00:02:54.550 SO libspdk_bdev_split.so.5.0 00:02:54.550 CC module/bdev/raid/bdev_raid_rpc.o 00:02:54.808 CC module/bdev/iscsi/bdev_iscsi.o 00:02:54.808 LIB libspdk_bdev_zone_block.a 00:02:54.808 SYMLINK libspdk_bdev_split.so 00:02:54.808 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:54.808 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:54.808 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:54.808 SO libspdk_bdev_zone_block.so.5.0 00:02:54.808 SYMLINK libspdk_bdev_zone_block.so 00:02:54.808 CC module/bdev/aio/bdev_aio_rpc.o 00:02:54.808 CC module/bdev/raid/bdev_raid_sb.o 00:02:54.808 CC module/bdev/raid/raid0.o 00:02:54.808 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:54.808 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.066 LIB libspdk_bdev_ftl.a 00:02:55.066 SO libspdk_bdev_ftl.so.5.0 00:02:55.066 LIB libspdk_bdev_aio.a 00:02:55.066 CC module/bdev/raid/raid1.o 00:02:55.066 SO libspdk_bdev_aio.so.5.0 00:02:55.066 LIB libspdk_bdev_iscsi.a 00:02:55.066 SYMLINK libspdk_bdev_ftl.so 00:02:55.066 SO libspdk_bdev_iscsi.so.5.0 00:02:55.066 CC module/bdev/raid/concat.o 00:02:55.066 SYMLINK libspdk_bdev_aio.so 00:02:55.066 SYMLINK libspdk_bdev_iscsi.so 00:02:55.324 LIB libspdk_bdev_virtio.a 00:02:55.324 LIB libspdk_bdev_raid.a 00:02:55.324 SO libspdk_bdev_virtio.so.5.0 00:02:55.324 SO libspdk_bdev_raid.so.5.0 00:02:55.324 SYMLINK libspdk_bdev_virtio.so 00:02:55.583 SYMLINK libspdk_bdev_raid.so 00:02:56.150 LIB libspdk_bdev_nvme.a 00:02:56.150 SO libspdk_bdev_nvme.so.6.0 00:02:56.408 SYMLINK libspdk_bdev_nvme.so 00:02:56.666 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:56.666 CC module/event/subsystems/vmd/vmd.o 00:02:56.666 CC module/event/subsystems/sock/sock.o 00:02:56.666 CC module/event/subsystems/iobuf/iobuf.o 00:02:56.666 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:56.666 CC module/event/subsystems/scheduler/scheduler.o 00:02:56.666 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:56.666 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:56.666 LIB libspdk_event_vhost_blk.a 00:02:56.666 LIB libspdk_event_scheduler.a 00:02:56.666 LIB libspdk_event_sock.a 00:02:56.666 LIB libspdk_event_vfu_tgt.a 00:02:56.666 SO libspdk_event_scheduler.so.3.0 00:02:56.666 SO libspdk_event_sock.so.4.0 00:02:56.666 LIB libspdk_event_vmd.a 00:02:56.925 SO libspdk_event_vhost_blk.so.2.0 00:02:56.925 LIB libspdk_event_iobuf.a 
00:02:56.925 SO libspdk_event_vfu_tgt.so.2.0 00:02:56.925 SO libspdk_event_vmd.so.5.0 00:02:56.925 SYMLINK libspdk_event_vhost_blk.so 00:02:56.925 SYMLINK libspdk_event_scheduler.so 00:02:56.925 SYMLINK libspdk_event_sock.so 00:02:56.925 SO libspdk_event_iobuf.so.2.0 00:02:56.925 SYMLINK libspdk_event_vfu_tgt.so 00:02:56.925 SYMLINK libspdk_event_vmd.so 00:02:56.925 SYMLINK libspdk_event_iobuf.so 00:02:57.183 CC module/event/subsystems/accel/accel.o 00:02:57.183 LIB libspdk_event_accel.a 00:02:57.442 SO libspdk_event_accel.so.5.0 00:02:57.442 SYMLINK libspdk_event_accel.so 00:02:57.442 CC module/event/subsystems/bdev/bdev.o 00:02:57.701 LIB libspdk_event_bdev.a 00:02:57.701 SO libspdk_event_bdev.so.5.0 00:02:57.959 SYMLINK libspdk_event_bdev.so 00:02:57.959 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.959 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.959 CC module/event/subsystems/nbd/nbd.o 00:02:57.959 CC module/event/subsystems/ublk/ublk.o 00:02:57.959 CC module/event/subsystems/scsi/scsi.o 00:02:58.218 LIB libspdk_event_nbd.a 00:02:58.218 LIB libspdk_event_ublk.a 00:02:58.218 LIB libspdk_event_scsi.a 00:02:58.218 SO libspdk_event_nbd.so.5.0 00:02:58.218 SO libspdk_event_ublk.so.2.0 00:02:58.218 SO libspdk_event_scsi.so.5.0 00:02:58.218 LIB libspdk_event_nvmf.a 00:02:58.218 SYMLINK libspdk_event_nbd.so 00:02:58.218 SYMLINK libspdk_event_ublk.so 00:02:58.218 SYMLINK libspdk_event_scsi.so 00:02:58.218 SO libspdk_event_nvmf.so.5.0 00:02:58.475 SYMLINK libspdk_event_nvmf.so 00:02:58.475 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.475 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:58.734 LIB libspdk_event_vhost_scsi.a 00:02:58.734 LIB libspdk_event_iscsi.a 00:02:58.734 SO libspdk_event_vhost_scsi.so.2.0 00:02:58.734 SO libspdk_event_iscsi.so.5.0 00:02:58.734 SYMLINK libspdk_event_iscsi.so 00:02:58.734 SYMLINK libspdk_event_vhost_scsi.so 00:02:58.992 SO libspdk.so.5.0 00:02:58.992 SYMLINK libspdk.so 00:02:58.992 CXX app/trace/trace.o 00:02:58.992 CC app/trace_record/trace_record.o 00:02:58.992 CC app/spdk_nvme_identify/identify.o 00:02:58.992 CC app/spdk_lspci/spdk_lspci.o 00:02:58.992 CC app/spdk_nvme_perf/perf.o 00:02:59.250 CC app/iscsi_tgt/iscsi_tgt.o 00:02:59.250 CC app/nvmf_tgt/nvmf_main.o 00:02:59.250 CC app/spdk_tgt/spdk_tgt.o 00:02:59.250 CC examples/accel/perf/accel_perf.o 00:02:59.250 CC test/accel/dif/dif.o 00:02:59.250 LINK spdk_lspci 00:02:59.250 LINK spdk_trace_record 00:02:59.509 LINK nvmf_tgt 00:02:59.509 LINK iscsi_tgt 00:02:59.509 LINK spdk_tgt 00:02:59.509 LINK spdk_trace 00:02:59.509 CC examples/bdev/hello_world/hello_bdev.o 00:02:59.509 CC examples/bdev/bdevperf/bdevperf.o 00:02:59.509 LINK dif 00:02:59.509 LINK accel_perf 00:02:59.766 CC app/spdk_nvme_discover/discovery_aer.o 00:02:59.766 CC app/spdk_top/spdk_top.o 00:02:59.766 CC app/vhost/vhost.o 00:02:59.766 LINK hello_bdev 00:02:59.766 CC examples/blob/hello_world/hello_blob.o 00:02:59.766 LINK spdk_nvme_discover 00:03:00.034 LINK spdk_nvme_identify 00:03:00.034 CC examples/ioat/perf/perf.o 00:03:00.034 LINK spdk_nvme_perf 00:03:00.034 LINK vhost 00:03:00.034 CC test/app/bdev_svc/bdev_svc.o 00:03:00.034 LINK hello_blob 00:03:00.034 CC test/app/jsoncat/jsoncat.o 00:03:00.034 LINK ioat_perf 00:03:00.034 CC test/app/histogram_perf/histogram_perf.o 00:03:00.034 CC test/app/stub/stub.o 00:03:00.034 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:00.034 LINK bdev_svc 00:03:00.291 LINK jsoncat 00:03:00.291 CC examples/blob/cli/blobcli.o 00:03:00.291 LINK histogram_perf 00:03:00.291 LINK stub 
00:03:00.291 CC examples/ioat/verify/verify.o 00:03:00.291 LINK bdevperf 00:03:00.548 TEST_HEADER include/spdk/accel.h 00:03:00.548 TEST_HEADER include/spdk/accel_module.h 00:03:00.548 TEST_HEADER include/spdk/assert.h 00:03:00.548 TEST_HEADER include/spdk/barrier.h 00:03:00.548 TEST_HEADER include/spdk/base64.h 00:03:00.548 TEST_HEADER include/spdk/bdev.h 00:03:00.548 TEST_HEADER include/spdk/bdev_module.h 00:03:00.549 TEST_HEADER include/spdk/bdev_zone.h 00:03:00.549 TEST_HEADER include/spdk/bit_array.h 00:03:00.549 TEST_HEADER include/spdk/bit_pool.h 00:03:00.549 TEST_HEADER include/spdk/blob_bdev.h 00:03:00.549 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:00.549 TEST_HEADER include/spdk/blobfs.h 00:03:00.549 TEST_HEADER include/spdk/blob.h 00:03:00.549 CC test/bdev/bdevio/bdevio.o 00:03:00.549 TEST_HEADER include/spdk/conf.h 00:03:00.549 TEST_HEADER include/spdk/config.h 00:03:00.549 TEST_HEADER include/spdk/cpuset.h 00:03:00.549 TEST_HEADER include/spdk/crc16.h 00:03:00.549 TEST_HEADER include/spdk/crc32.h 00:03:00.549 TEST_HEADER include/spdk/crc64.h 00:03:00.549 LINK spdk_top 00:03:00.549 TEST_HEADER include/spdk/dif.h 00:03:00.549 TEST_HEADER include/spdk/dma.h 00:03:00.549 TEST_HEADER include/spdk/endian.h 00:03:00.549 CC test/blobfs/mkfs/mkfs.o 00:03:00.549 TEST_HEADER include/spdk/env_dpdk.h 00:03:00.549 TEST_HEADER include/spdk/env.h 00:03:00.549 TEST_HEADER include/spdk/event.h 00:03:00.549 TEST_HEADER include/spdk/fd_group.h 00:03:00.549 TEST_HEADER include/spdk/fd.h 00:03:00.549 LINK verify 00:03:00.549 TEST_HEADER include/spdk/file.h 00:03:00.549 LINK nvme_fuzz 00:03:00.549 TEST_HEADER include/spdk/ftl.h 00:03:00.549 TEST_HEADER include/spdk/gpt_spec.h 00:03:00.549 TEST_HEADER include/spdk/hexlify.h 00:03:00.549 TEST_HEADER include/spdk/histogram_data.h 00:03:00.549 TEST_HEADER include/spdk/idxd.h 00:03:00.549 TEST_HEADER include/spdk/idxd_spec.h 00:03:00.549 TEST_HEADER include/spdk/init.h 00:03:00.549 TEST_HEADER include/spdk/ioat.h 00:03:00.549 TEST_HEADER include/spdk/ioat_spec.h 00:03:00.549 CC examples/nvme/hello_world/hello_world.o 00:03:00.549 TEST_HEADER include/spdk/iscsi_spec.h 00:03:00.549 TEST_HEADER include/spdk/json.h 00:03:00.549 TEST_HEADER include/spdk/jsonrpc.h 00:03:00.549 TEST_HEADER include/spdk/likely.h 00:03:00.549 TEST_HEADER include/spdk/log.h 00:03:00.549 TEST_HEADER include/spdk/lvol.h 00:03:00.549 TEST_HEADER include/spdk/memory.h 00:03:00.549 TEST_HEADER include/spdk/mmio.h 00:03:00.549 TEST_HEADER include/spdk/nbd.h 00:03:00.549 TEST_HEADER include/spdk/notify.h 00:03:00.549 TEST_HEADER include/spdk/nvme.h 00:03:00.549 TEST_HEADER include/spdk/nvme_intel.h 00:03:00.549 CC examples/nvme/reconnect/reconnect.o 00:03:00.549 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:00.549 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:00.549 CC examples/sock/hello_world/hello_sock.o 00:03:00.549 TEST_HEADER include/spdk/nvme_spec.h 00:03:00.549 TEST_HEADER include/spdk/nvme_zns.h 00:03:00.549 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:00.549 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:00.549 TEST_HEADER include/spdk/nvmf.h 00:03:00.549 TEST_HEADER include/spdk/nvmf_spec.h 00:03:00.549 TEST_HEADER include/spdk/nvmf_transport.h 00:03:00.549 TEST_HEADER include/spdk/opal.h 00:03:00.549 TEST_HEADER include/spdk/opal_spec.h 00:03:00.549 TEST_HEADER include/spdk/pci_ids.h 00:03:00.549 TEST_HEADER include/spdk/pipe.h 00:03:00.549 TEST_HEADER include/spdk/queue.h 00:03:00.549 TEST_HEADER include/spdk/reduce.h 00:03:00.549 TEST_HEADER include/spdk/rpc.h 00:03:00.549 
TEST_HEADER include/spdk/scheduler.h 00:03:00.806 TEST_HEADER include/spdk/scsi.h 00:03:00.806 TEST_HEADER include/spdk/scsi_spec.h 00:03:00.806 TEST_HEADER include/spdk/sock.h 00:03:00.806 TEST_HEADER include/spdk/stdinc.h 00:03:00.806 TEST_HEADER include/spdk/string.h 00:03:00.806 TEST_HEADER include/spdk/thread.h 00:03:00.806 TEST_HEADER include/spdk/trace.h 00:03:00.806 TEST_HEADER include/spdk/trace_parser.h 00:03:00.806 TEST_HEADER include/spdk/tree.h 00:03:00.806 TEST_HEADER include/spdk/ublk.h 00:03:00.806 TEST_HEADER include/spdk/util.h 00:03:00.806 TEST_HEADER include/spdk/uuid.h 00:03:00.806 TEST_HEADER include/spdk/version.h 00:03:00.806 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:00.806 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:00.806 TEST_HEADER include/spdk/vhost.h 00:03:00.806 LINK blobcli 00:03:00.806 TEST_HEADER include/spdk/vmd.h 00:03:00.806 TEST_HEADER include/spdk/xor.h 00:03:00.806 TEST_HEADER include/spdk/zipf.h 00:03:00.806 CXX test/cpp_headers/accel.o 00:03:00.806 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:00.806 LINK mkfs 00:03:00.806 CC app/spdk_dd/spdk_dd.o 00:03:00.806 LINK hello_world 00:03:00.806 LINK hello_sock 00:03:00.806 CC app/fio/nvme/fio_plugin.o 00:03:00.806 LINK bdevio 00:03:00.806 LINK reconnect 00:03:01.063 CXX test/cpp_headers/accel_module.o 00:03:01.063 CXX test/cpp_headers/assert.o 00:03:01.063 CC app/fio/bdev/fio_plugin.o 00:03:01.063 CXX test/cpp_headers/barrier.o 00:03:01.063 CC test/dma/test_dma/test_dma.o 00:03:01.063 LINK spdk_dd 00:03:01.063 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.063 CC test/env/mem_callbacks/mem_callbacks.o 00:03:01.063 CC examples/nvme/arbitration/arbitration.o 00:03:01.063 CC examples/nvme/hotplug/hotplug.o 00:03:01.320 CXX test/cpp_headers/base64.o 00:03:01.320 CXX test/cpp_headers/bdev.o 00:03:01.321 LINK spdk_nvme 00:03:01.321 LINK hotplug 00:03:01.612 LINK test_dma 00:03:01.612 CC test/env/vtophys/vtophys.o 00:03:01.612 LINK spdk_bdev 00:03:01.612 CXX test/cpp_headers/bdev_module.o 00:03:01.612 LINK arbitration 00:03:01.612 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:01.612 LINK nvme_manage 00:03:01.612 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:01.612 LINK vtophys 00:03:01.612 CXX test/cpp_headers/bdev_zone.o 00:03:01.612 LINK env_dpdk_post_init 00:03:01.612 CC examples/nvme/abort/abort.o 00:03:01.612 LINK mem_callbacks 00:03:01.868 CC test/env/memory/memory_ut.o 00:03:01.869 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:01.869 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:01.869 CXX test/cpp_headers/bit_array.o 00:03:01.869 LINK cmb_copy 00:03:01.869 CXX test/cpp_headers/bit_pool.o 00:03:01.869 CC test/env/pci/pci_ut.o 00:03:01.869 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:01.869 LINK pmr_persistence 00:03:01.869 CC examples/vmd/lsvmd/lsvmd.o 00:03:02.126 CXX test/cpp_headers/blob_bdev.o 00:03:02.126 CC examples/vmd/led/led.o 00:03:02.126 LINK abort 00:03:02.126 LINK lsvmd 00:03:02.126 CC examples/nvmf/nvmf/nvmf.o 00:03:02.126 CXX test/cpp_headers/blobfs_bdev.o 00:03:02.126 LINK led 00:03:02.383 LINK pci_ut 00:03:02.383 LINK vhost_fuzz 00:03:02.383 CC test/event/event_perf/event_perf.o 00:03:02.383 CXX test/cpp_headers/blobfs.o 00:03:02.383 CXX test/cpp_headers/blob.o 00:03:02.383 CC test/nvme/aer/aer.o 00:03:02.383 LINK iscsi_fuzz 00:03:02.383 CC test/lvol/esnap/esnap.o 00:03:02.383 LINK nvmf 00:03:02.383 CXX test/cpp_headers/conf.o 00:03:02.383 LINK event_perf 00:03:02.640 CC test/rpc_client/rpc_client_test.o 00:03:02.640 CC 
test/event/reactor/reactor.o 00:03:02.640 LINK memory_ut 00:03:02.640 CXX test/cpp_headers/config.o 00:03:02.640 LINK aer 00:03:02.640 CXX test/cpp_headers/cpuset.o 00:03:02.640 CXX test/cpp_headers/crc16.o 00:03:02.640 CC test/thread/poller_perf/poller_perf.o 00:03:02.640 LINK reactor 00:03:02.640 CC test/event/reactor_perf/reactor_perf.o 00:03:02.640 LINK rpc_client_test 00:03:02.640 CC examples/util/zipf/zipf.o 00:03:02.898 CXX test/cpp_headers/crc32.o 00:03:02.898 LINK poller_perf 00:03:02.898 CC test/nvme/reset/reset.o 00:03:02.898 CC test/nvme/sgl/sgl.o 00:03:02.898 LINK reactor_perf 00:03:02.898 CXX test/cpp_headers/crc64.o 00:03:02.898 CC test/event/app_repeat/app_repeat.o 00:03:02.898 LINK zipf 00:03:02.898 CC test/event/scheduler/scheduler.o 00:03:03.155 CC test/nvme/e2edp/nvme_dp.o 00:03:03.155 CXX test/cpp_headers/dif.o 00:03:03.155 CXX test/cpp_headers/dma.o 00:03:03.155 LINK app_repeat 00:03:03.155 LINK reset 00:03:03.155 CC examples/thread/thread/thread_ex.o 00:03:03.155 CC examples/idxd/perf/perf.o 00:03:03.155 LINK sgl 00:03:03.155 LINK scheduler 00:03:03.155 CXX test/cpp_headers/endian.o 00:03:03.413 LINK nvme_dp 00:03:03.413 CC test/nvme/overhead/overhead.o 00:03:03.413 CC test/nvme/err_injection/err_injection.o 00:03:03.413 CC test/nvme/startup/startup.o 00:03:03.413 LINK thread 00:03:03.413 CXX test/cpp_headers/env_dpdk.o 00:03:03.413 CC test/nvme/reserve/reserve.o 00:03:03.413 CXX test/cpp_headers/env.o 00:03:03.413 LINK idxd_perf 00:03:03.413 LINK err_injection 00:03:03.413 LINK startup 00:03:03.671 CXX test/cpp_headers/event.o 00:03:03.672 LINK overhead 00:03:03.672 CXX test/cpp_headers/fd_group.o 00:03:03.672 LINK reserve 00:03:03.672 CXX test/cpp_headers/fd.o 00:03:03.672 CXX test/cpp_headers/file.o 00:03:03.672 CC test/nvme/simple_copy/simple_copy.o 00:03:03.672 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.672 CXX test/cpp_headers/ftl.o 00:03:03.672 CXX test/cpp_headers/gpt_spec.o 00:03:03.930 CXX test/cpp_headers/hexlify.o 00:03:03.930 CXX test/cpp_headers/histogram_data.o 00:03:03.930 CC test/nvme/connect_stress/connect_stress.o 00:03:03.930 CXX test/cpp_headers/idxd.o 00:03:03.930 LINK interrupt_tgt 00:03:03.930 LINK simple_copy 00:03:03.930 CXX test/cpp_headers/idxd_spec.o 00:03:03.930 CXX test/cpp_headers/init.o 00:03:03.930 CXX test/cpp_headers/ioat.o 00:03:03.930 CXX test/cpp_headers/ioat_spec.o 00:03:03.930 LINK connect_stress 00:03:03.930 CXX test/cpp_headers/iscsi_spec.o 00:03:04.188 CXX test/cpp_headers/json.o 00:03:04.188 CXX test/cpp_headers/jsonrpc.o 00:03:04.188 CXX test/cpp_headers/likely.o 00:03:04.188 CXX test/cpp_headers/log.o 00:03:04.188 CXX test/cpp_headers/lvol.o 00:03:04.188 CC test/nvme/boot_partition/boot_partition.o 00:03:04.188 CXX test/cpp_headers/memory.o 00:03:04.188 CC test/nvme/compliance/nvme_compliance.o 00:03:04.188 CXX test/cpp_headers/mmio.o 00:03:04.447 CC test/nvme/fused_ordering/fused_ordering.o 00:03:04.447 LINK boot_partition 00:03:04.447 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:04.447 CC test/nvme/fdp/fdp.o 00:03:04.447 CC test/nvme/cuse/cuse.o 00:03:04.447 CXX test/cpp_headers/nbd.o 00:03:04.447 CXX test/cpp_headers/notify.o 00:03:04.447 CXX test/cpp_headers/nvme.o 00:03:04.705 CXX test/cpp_headers/nvme_intel.o 00:03:04.705 LINK fused_ordering 00:03:04.705 LINK nvme_compliance 00:03:04.705 LINK doorbell_aers 00:03:04.705 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.705 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.705 LINK fdp 00:03:04.705 CXX test/cpp_headers/nvme_spec.o 00:03:04.705 CXX 
test/cpp_headers/nvme_zns.o 00:03:04.705 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.705 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.963 CXX test/cpp_headers/nvmf.o 00:03:04.963 CXX test/cpp_headers/nvmf_spec.o 00:03:04.963 CXX test/cpp_headers/nvmf_transport.o 00:03:04.963 CXX test/cpp_headers/opal.o 00:03:04.963 CXX test/cpp_headers/opal_spec.o 00:03:04.963 CXX test/cpp_headers/pci_ids.o 00:03:04.963 CXX test/cpp_headers/pipe.o 00:03:05.221 CXX test/cpp_headers/queue.o 00:03:05.221 CXX test/cpp_headers/reduce.o 00:03:05.221 CXX test/cpp_headers/rpc.o 00:03:05.221 CXX test/cpp_headers/scsi.o 00:03:05.221 CXX test/cpp_headers/scheduler.o 00:03:05.221 CXX test/cpp_headers/scsi_spec.o 00:03:05.221 CXX test/cpp_headers/sock.o 00:03:05.221 CXX test/cpp_headers/stdinc.o 00:03:05.221 CXX test/cpp_headers/string.o 00:03:05.221 CXX test/cpp_headers/thread.o 00:03:05.479 CXX test/cpp_headers/trace.o 00:03:05.479 CXX test/cpp_headers/trace_parser.o 00:03:05.479 CXX test/cpp_headers/tree.o 00:03:05.479 CXX test/cpp_headers/ublk.o 00:03:05.479 CXX test/cpp_headers/util.o 00:03:05.479 CXX test/cpp_headers/uuid.o 00:03:05.479 CXX test/cpp_headers/version.o 00:03:05.479 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.479 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.479 CXX test/cpp_headers/vhost.o 00:03:05.479 CXX test/cpp_headers/vmd.o 00:03:05.479 CXX test/cpp_headers/xor.o 00:03:05.479 LINK cuse 00:03:05.479 CXX test/cpp_headers/zipf.o 00:03:06.858 LINK esnap 00:03:10.142 00:03:10.143 real 1m3.993s 00:03:10.143 user 6m36.434s 00:03:10.143 sys 1m35.683s 00:03:10.143 06:55:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:10.143 06:55:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.143 ************************************ 00:03:10.143 END TEST make 00:03:10.143 ************************************ 00:03:10.143 06:55:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:10.143 06:55:54 -- nvmf/common.sh@7 -- # uname -s 00:03:10.143 06:55:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.143 06:55:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.143 06:55:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.143 06:55:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.143 06:55:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.143 06:55:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.143 06:55:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.143 06:55:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.143 06:55:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.143 06:55:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.143 06:55:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:03:10.143 06:55:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:03:10.143 06:55:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.143 06:55:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.143 06:55:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:10.143 06:55:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:10.143 06:55:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.143 06:55:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.143 06:55:54 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.143 06:55:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.143 06:55:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.143 06:55:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.143 06:55:54 -- paths/export.sh@5 -- # export PATH 00:03:10.143 06:55:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.143 06:55:54 -- nvmf/common.sh@46 -- # : 0 00:03:10.143 06:55:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:10.143 06:55:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:10.143 06:55:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:10.143 06:55:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.143 06:55:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.143 06:55:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:10.143 06:55:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:10.143 06:55:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:10.143 06:55:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.143 06:55:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.143 06:55:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.143 06:55:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.143 06:55:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:10.143 06:55:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.143 06:55:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:10.143 06:55:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.143 06:55:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.143 06:55:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.143 06:55:54 -- spdk/autotest.sh@48 -- # udevadm_pid=49675 00:03:10.143 06:55:54 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:10.143 06:55:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.143 06:55:54 -- spdk/autotest.sh@54 -- # echo 49687 00:03:10.143 06:55:54 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:10.143 06:55:54 -- spdk/autotest.sh@56 -- # echo 49691 00:03:10.143 06:55:54 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:10.143 06:55:54 -- 
spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:10.143 06:55:54 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.143 06:55:54 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:10.143 06:55:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:10.143 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:10.143 06:55:54 -- spdk/autotest.sh@70 -- # create_test_list 00:03:10.143 06:55:54 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:10.143 06:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:10.401 06:55:54 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:10.401 06:55:54 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:10.401 06:55:54 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:10.401 06:55:54 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:10.401 06:55:54 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:10.401 06:55:54 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:10.401 06:55:54 -- common/autotest_common.sh@1440 -- # uname 00:03:10.401 06:55:54 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:10.401 06:55:54 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:10.401 06:55:54 -- common/autotest_common.sh@1460 -- # uname 00:03:10.401 06:55:54 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:10.401 06:55:54 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:10.401 06:55:54 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:10.401 06:55:54 -- spdk/autotest.sh@83 -- # hash lcov 00:03:10.401 06:55:54 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:10.401 06:55:54 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:10.401 --rc lcov_branch_coverage=1 00:03:10.401 --rc lcov_function_coverage=1 00:03:10.401 --rc genhtml_branch_coverage=1 00:03:10.401 --rc genhtml_function_coverage=1 00:03:10.401 --rc genhtml_legend=1 00:03:10.401 --rc geninfo_all_blocks=1 00:03:10.401 ' 00:03:10.401 06:55:54 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:10.401 --rc lcov_branch_coverage=1 00:03:10.401 --rc lcov_function_coverage=1 00:03:10.401 --rc genhtml_branch_coverage=1 00:03:10.401 --rc genhtml_function_coverage=1 00:03:10.401 --rc genhtml_legend=1 00:03:10.401 --rc geninfo_all_blocks=1 00:03:10.401 ' 00:03:10.401 06:55:54 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:10.401 --rc lcov_branch_coverage=1 00:03:10.401 --rc lcov_function_coverage=1 00:03:10.401 --rc genhtml_branch_coverage=1 00:03:10.401 --rc genhtml_function_coverage=1 00:03:10.401 --rc genhtml_legend=1 00:03:10.401 --rc geninfo_all_blocks=1 00:03:10.401 --no-external' 00:03:10.401 06:55:54 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:10.401 --rc lcov_branch_coverage=1 00:03:10.401 --rc lcov_function_coverage=1 00:03:10.401 --rc genhtml_branch_coverage=1 00:03:10.401 --rc genhtml_function_coverage=1 00:03:10.401 --rc genhtml_legend=1 00:03:10.401 --rc geninfo_all_blocks=1 00:03:10.401 --no-external' 00:03:10.401 06:55:54 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:10.401 lcov: LCOV version 1.14 00:03:10.401 06:55:54 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:18.540 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:18.540 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:18.540 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:18.540 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:18.540 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:18.540 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions 
found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:36.621 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:36.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:36.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 
00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:36.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:36.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:38.000 06:56:22 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:38.000 06:56:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:38.000 06:56:22 -- common/autotest_common.sh@10 -- # set +x 00:03:38.000 06:56:22 -- spdk/autotest.sh@102 -- # rm -f 00:03:38.000 06:56:22 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:38.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.933 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:38.933 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:38.933 06:56:22 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:38.933 06:56:22 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:38.933 06:56:22 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:38.933 06:56:22 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:38.933 06:56:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:38.933 06:56:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:38.933 06:56:22 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:38.933 06:56:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.933 06:56:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:38.933 06:56:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:38.933 06:56:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:38.933 06:56:22 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:38.933 06:56:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:38.933 06:56:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:38.933 06:56:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:38.933 06:56:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 
00:03:38.933 06:56:22 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:03:38.933 06:56:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:38.933 06:56:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:38.933 06:56:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:38.933 06:56:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:03:38.933 06:56:22 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:03:38.933 06:56:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:38.933 06:56:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:38.933 06:56:22 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:38.933 06:56:22 -- spdk/autotest.sh@121 -- # grep -v p 00:03:38.933 06:56:22 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:03:38.933 06:56:22 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:38.933 06:56:22 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:38.933 06:56:22 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:38.933 06:56:22 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:38.933 06:56:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:38.933 No valid GPT data, bailing 00:03:38.933 06:56:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.933 06:56:22 -- scripts/common.sh@393 -- # pt= 00:03:38.933 06:56:22 -- scripts/common.sh@394 -- # return 1 00:03:38.933 06:56:22 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:38.933 1+0 records in 00:03:38.933 1+0 records out 00:03:38.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00538594 s, 195 MB/s 00:03:38.933 06:56:22 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:38.933 06:56:22 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:38.933 06:56:22 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:03:38.933 06:56:22 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:38.933 06:56:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:38.933 No valid GPT data, bailing 00:03:38.933 06:56:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:38.933 06:56:22 -- scripts/common.sh@393 -- # pt= 00:03:38.933 06:56:22 -- scripts/common.sh@394 -- # return 1 00:03:38.933 06:56:22 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:38.933 1+0 records in 00:03:38.933 1+0 records out 00:03:38.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580966 s, 180 MB/s 00:03:38.934 06:56:22 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:38.934 06:56:22 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:38.934 06:56:22 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:03:38.934 06:56:22 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:38.934 06:56:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:39.192 No valid GPT data, bailing 00:03:39.192 06:56:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:39.192 06:56:23 -- scripts/common.sh@393 -- # pt= 00:03:39.192 06:56:23 -- scripts/common.sh@394 -- # return 1 00:03:39.192 06:56:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:39.192 1+0 
records in 00:03:39.192 1+0 records out 00:03:39.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525212 s, 200 MB/s 00:03:39.192 06:56:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:39.192 06:56:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:39.192 06:56:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:03:39.192 06:56:23 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:39.192 06:56:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:39.192 No valid GPT data, bailing 00:03:39.192 06:56:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:39.192 06:56:23 -- scripts/common.sh@393 -- # pt= 00:03:39.192 06:56:23 -- scripts/common.sh@394 -- # return 1 00:03:39.192 06:56:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:39.192 1+0 records in 00:03:39.192 1+0 records out 00:03:39.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577116 s, 182 MB/s 00:03:39.192 06:56:23 -- spdk/autotest.sh@129 -- # sync 00:03:39.192 06:56:23 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:39.192 06:56:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:39.192 06:56:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:41.095 06:56:24 -- spdk/autotest.sh@135 -- # uname -s 00:03:41.095 06:56:24 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:41.095 06:56:24 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:41.095 06:56:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.095 06:56:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.095 06:56:24 -- common/autotest_common.sh@10 -- # set +x 00:03:41.095 ************************************ 00:03:41.095 START TEST setup.sh 00:03:41.095 ************************************ 00:03:41.095 06:56:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:41.095 * Looking for test storage... 00:03:41.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.095 06:56:25 -- setup/test-setup.sh@10 -- # uname -s 00:03:41.095 06:56:25 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:41.095 06:56:25 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:41.095 06:56:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.095 06:56:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.095 06:56:25 -- common/autotest_common.sh@10 -- # set +x 00:03:41.095 ************************************ 00:03:41.095 START TEST acl 00:03:41.095 ************************************ 00:03:41.095 06:56:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:41.354 * Looking for test storage... 
00:03:41.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.354 06:56:25 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:41.354 06:56:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:41.354 06:56:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:41.354 06:56:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:41.354 06:56:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:41.354 06:56:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:41.354 06:56:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:41.354 06:56:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.354 06:56:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:41.354 06:56:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:41.354 06:56:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:41.354 06:56:25 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:41.354 06:56:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:41.354 06:56:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:41.354 06:56:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:41.354 06:56:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:03:41.354 06:56:25 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:03:41.354 06:56:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:41.354 06:56:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:41.354 06:56:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:41.354 06:56:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:03:41.354 06:56:25 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:03:41.354 06:56:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:41.354 06:56:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:41.354 06:56:25 -- setup/acl.sh@12 -- # devs=() 00:03:41.354 06:56:25 -- setup/acl.sh@12 -- # declare -a devs 00:03:41.354 06:56:25 -- setup/acl.sh@13 -- # drivers=() 00:03:41.354 06:56:25 -- setup/acl.sh@13 -- # declare -A drivers 00:03:41.354 06:56:25 -- setup/acl.sh@51 -- # setup reset 00:03:41.354 06:56:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.354 06:56:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.921 06:56:25 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:41.921 06:56:25 -- setup/acl.sh@16 -- # local dev driver 00:03:41.921 06:56:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.921 06:56:25 -- setup/acl.sh@15 -- # setup output status 00:03:41.921 06:56:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.921 06:56:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:42.179 Hugepages 00:03:42.179 node hugesize free / total 00:03:42.179 06:56:26 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:42.179 06:56:26 -- setup/acl.sh@19 -- # continue 00:03:42.179 06:56:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.179 00:03:42.179 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.179 06:56:26 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:42.179 06:56:26 -- setup/acl.sh@19 -- # continue 00:03:42.179 06:56:26 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:03:42.179 06:56:26 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:42.179 06:56:26 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:42.179 06:56:26 -- setup/acl.sh@20 -- # continue 00:03:42.179 06:56:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.179 06:56:26 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:42.179 06:56:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.179 06:56:26 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:42.179 06:56:26 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.179 06:56:26 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.179 06:56:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.438 06:56:26 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:42.438 06:56:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.438 06:56:26 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:42.438 06:56:26 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.438 06:56:26 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.438 06:56:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.438 06:56:26 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:42.438 06:56:26 -- setup/acl.sh@54 -- # run_test denied denied 00:03:42.438 06:56:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.438 06:56:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.438 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:03:42.438 ************************************ 00:03:42.438 START TEST denied 00:03:42.438 ************************************ 00:03:42.438 06:56:26 -- common/autotest_common.sh@1104 -- # denied 00:03:42.438 06:56:26 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:42.438 06:56:26 -- setup/acl.sh@38 -- # setup output config 00:03:42.438 06:56:26 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:42.438 06:56:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.438 06:56:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.374 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:43.374 06:56:27 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:43.374 06:56:27 -- setup/acl.sh@28 -- # local dev driver 00:03:43.374 06:56:27 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:43.374 06:56:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:43.374 06:56:27 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:43.374 06:56:27 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:43.374 06:56:27 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:43.374 06:56:27 -- setup/acl.sh@41 -- # setup reset 00:03:43.374 06:56:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.374 06:56:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.941 ************************************ 00:03:43.941 END TEST denied 00:03:43.941 ************************************ 00:03:43.941 00:03:43.941 real 0m1.487s 00:03:43.941 user 0m0.592s 00:03:43.941 sys 0m0.831s 00:03:43.941 06:56:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.941 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:03:43.941 06:56:27 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:43.941 06:56:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.941 06:56:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.941 
06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:03:43.941 ************************************ 00:03:43.941 START TEST allowed 00:03:43.941 ************************************ 00:03:43.941 06:56:27 -- common/autotest_common.sh@1104 -- # allowed 00:03:43.941 06:56:27 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:43.941 06:56:27 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:43.941 06:56:27 -- setup/acl.sh@45 -- # setup output config 00:03:43.941 06:56:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.941 06:56:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.873 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.873 06:56:28 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:03:44.873 06:56:28 -- setup/acl.sh@28 -- # local dev driver 00:03:44.873 06:56:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.873 06:56:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:44.873 06:56:28 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:44.873 06:56:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.873 06:56:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.873 06:56:28 -- setup/acl.sh@48 -- # setup reset 00:03:44.873 06:56:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.873 06:56:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.439 00:03:45.439 real 0m1.563s 00:03:45.439 user 0m0.703s 00:03:45.439 sys 0m0.834s 00:03:45.439 06:56:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.439 ************************************ 00:03:45.439 END TEST allowed 00:03:45.439 ************************************ 00:03:45.439 06:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.439 00:03:45.439 real 0m4.355s 00:03:45.439 user 0m1.864s 00:03:45.439 sys 0m2.421s 00:03:45.439 06:56:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.439 ************************************ 00:03:45.439 END TEST acl 00:03:45.439 ************************************ 00:03:45.439 06:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.439 06:56:29 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:45.439 06:56:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.439 06:56:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.439 06:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.698 ************************************ 00:03:45.698 START TEST hugepages 00:03:45.698 ************************************ 00:03:45.698 06:56:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:45.698 * Looking for test storage... 
00:03:45.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:45.698 06:56:29 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.698 06:56:29 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.698 06:56:29 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.698 06:56:29 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.698 06:56:29 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.698 06:56:29 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.698 06:56:29 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.698 06:56:29 -- setup/common.sh@18 -- # local node= 00:03:45.698 06:56:29 -- setup/common.sh@19 -- # local var val 00:03:45.698 06:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.698 06:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.698 06:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.698 06:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.698 06:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.698 06:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5908916 kB' 'MemAvailable: 7401180 kB' 'Buffers: 2436 kB' 'Cached: 1703996 kB' 'SwapCached: 0 kB' 'Active: 475532 kB' 'Inactive: 1333744 kB' 'Active(anon): 113332 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 104508 kB' 'Mapped: 48860 kB' 'Shmem: 10488 kB' 'KReclaimable: 66968 kB' 'Slab: 140328 kB' 'SReclaimable: 66968 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6284 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- 
setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.698 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.698 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.699 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.699 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 06:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.700 06:56:29 -- setup/common.sh@32 -- # continue 00:03:45.700 06:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.700 06:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.700 06:56:29 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.700 06:56:29 -- setup/common.sh@33 -- # echo 2048 00:03:45.700 06:56:29 -- setup/common.sh@33 -- # return 0 00:03:45.700 06:56:29 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:45.700 06:56:29 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:45.700 06:56:29 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:45.700 06:56:29 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:45.700 06:56:29 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:45.700 06:56:29 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
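The Hugepagesize pass traced above is the plain /proc/meminfo walk that setup/common.sh's get_meminfo performs: split each line on ': ', skip every key that is not the one requested, then echo the value and return. A minimal sketch of that pattern, under the assumption that get_meminfo_sketch is a hypothetical, simplified stand-in and not the repository's own helper:

# Hypothetical, simplified re-creation of the meminfo walk shown in the trace above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # keys that do not match the requested one are skipped, exactly as the
        # long run of "continue" lines in the trace shows
        [[ $var == "$get" ]] || continue
        echo "$val"          # e.g. 2048 for Hugepagesize on this runner
        return 0
    done < /proc/meminfo
}
# default_hugepages=$(get_meminfo_sketch Hugepagesize)   # the value hugepages.sh@16 stores above

That echoed 2048 is what hugepages.sh@16 records as default_hugepages before the script unsets HUGE_EVEN_ALLOC, HUGEMEM, HUGENODE and NRHUGE and moves on to the per-node setup and the default_setup test.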
00:03:45.700 06:56:29 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:45.700 06:56:29 -- setup/hugepages.sh@207 -- # get_nodes 00:03:45.700 06:56:29 -- setup/hugepages.sh@27 -- # local node 00:03:45.700 06:56:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.700 06:56:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:45.700 06:56:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.700 06:56:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.700 06:56:29 -- setup/hugepages.sh@208 -- # clear_hp 00:03:45.700 06:56:29 -- setup/hugepages.sh@37 -- # local node hp 00:03:45.700 06:56:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.700 06:56:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.700 06:56:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.700 06:56:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.700 06:56:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.700 06:56:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.700 06:56:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.700 06:56:29 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:45.700 06:56:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.700 06:56:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.700 06:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:45.700 ************************************ 00:03:45.700 START TEST default_setup 00:03:45.700 ************************************ 00:03:45.700 06:56:29 -- common/autotest_common.sh@1104 -- # default_setup 00:03:45.700 06:56:29 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.700 06:56:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.700 06:56:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.700 06:56:29 -- setup/hugepages.sh@51 -- # shift 00:03:45.700 06:56:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.700 06:56:29 -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.700 06:56:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.700 06:56:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.700 06:56:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.700 06:56:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.700 06:56:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.700 06:56:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.700 06:56:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.700 06:56:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.700 06:56:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.700 06:56:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:45.700 06:56:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.700 06:56:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.700 06:56:29 -- setup/hugepages.sh@73 -- # return 0 00:03:45.700 06:56:29 -- setup/hugepages.sh@137 -- # setup output 00:03:45.700 06:56:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.700 06:56:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.524 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.524 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.524 06:56:30 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:46.524 06:56:30 -- setup/hugepages.sh@89 -- # local node 00:03:46.524 06:56:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.524 06:56:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.524 06:56:30 -- setup/hugepages.sh@92 -- # local surp 00:03:46.524 06:56:30 -- setup/hugepages.sh@93 -- # local resv 00:03:46.524 06:56:30 -- setup/hugepages.sh@94 -- # local anon 00:03:46.524 06:56:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.524 06:56:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.524 06:56:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.524 06:56:30 -- setup/common.sh@18 -- # local node= 00:03:46.524 06:56:30 -- setup/common.sh@19 -- # local var val 00:03:46.524 06:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.524 06:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.524 06:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.524 06:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.524 06:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.524 06:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.524 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.524 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8018324 kB' 'MemAvailable: 9510436 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 'SwapCached: 0 kB' 'Active: 491476 kB' 'Inactive: 1333752 kB' 'Active(anon): 129276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333752 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120488 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 140020 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73372 kB' 'KernelStack: 6304 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 
06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 
-- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.525 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.525 06:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.525 06:56:30 -- setup/common.sh@33 -- # echo 0 00:03:46.525 06:56:30 -- setup/common.sh@33 -- # return 0 00:03:46.525 06:56:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.525 06:56:30 -- 
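With anon settled at 0, verify_nr_hugepages repeats the same meminfo walk twice more, for HugePages_Surp and HugePages_Rsvd, before comparing totals. A sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper from above (variable names follow the hugepages.sh trace; this is not the CI script itself):

anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB here: no transparent huge pages in use
surp=$(get_meminfo_sketch HugePages_Surp)   # 0 surplus pages
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 reserved pages
# default_setup requested 1024 pages, so the test then expects
#   (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages ))
# before reading HugePages_Total back out of /proc/meminfo, as the trace below does.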
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.525 06:56:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.525 06:56:30 -- setup/common.sh@18 -- # local node= 00:03:46.525 06:56:30 -- setup/common.sh@19 -- # local var val 00:03:46.525 06:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.525 06:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.525 06:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.526 06:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.526 06:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.526 06:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8018328 kB' 'MemAvailable: 9510440 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 'SwapCached: 0 kB' 'Active: 490804 kB' 'Inactive: 1333752 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333752 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 140008 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6256 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- 
setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.526 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.526 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.527 06:56:30 -- setup/common.sh@33 -- # echo 0 00:03:46.527 06:56:30 -- setup/common.sh@33 -- # return 0 00:03:46.527 06:56:30 -- setup/hugepages.sh@99 -- # surp=0 00:03:46.527 06:56:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.527 06:56:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.527 06:56:30 -- setup/common.sh@18 -- # local node= 00:03:46.527 06:56:30 -- setup/common.sh@19 -- # local var val 00:03:46.527 06:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.527 06:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.527 06:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.527 06:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.527 06:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.527 06:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8018328 kB' 'MemAvailable: 9510440 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 
'SwapCached: 0 kB' 'Active: 490860 kB' 'Inactive: 1333752 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333752 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 140008 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73360 kB' 'KernelStack: 6256 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:46.527 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.527 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 
00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.528 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.528 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.528 06:56:30 -- setup/common.sh@33 -- # echo 0 00:03:46.528 06:56:30 -- setup/common.sh@33 -- # return 0 00:03:46.528 06:56:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.528 nr_hugepages=1024 00:03:46.528 06:56:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.528 resv_hugepages=0 00:03:46.528 06:56:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.528 surplus_hugepages=0 00:03:46.528 06:56:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.528 anon_hugepages=0 00:03:46.528 06:56:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.528 06:56:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.528 06:56:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.528 06:56:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.528 06:56:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.528 06:56:30 -- setup/common.sh@18 -- # local node= 00:03:46.528 06:56:30 -- setup/common.sh@19 -- # local var val 00:03:46.528 06:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.528 06:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.528 06:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.528 06:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.528 06:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.528 06:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8018328 kB' 'MemAvailable: 9510440 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 'SwapCached: 0 kB' 'Active: 490860 kB' 'Inactive: 1333752 kB' 'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333752 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 140004 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73356 kB' 'KernelStack: 6272 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.788 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.788 06:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 
00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.789 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.789 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.789 06:56:30 -- setup/common.sh@33 -- # echo 1024 
00:03:46.789 06:56:30 -- setup/common.sh@33 -- # return 0 00:03:46.789 06:56:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.789 06:56:30 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.789 06:56:30 -- setup/hugepages.sh@27 -- # local node 00:03:46.789 06:56:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.789 06:56:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.789 06:56:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.789 06:56:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.789 06:56:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.790 06:56:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.790 06:56:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.790 06:56:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.790 06:56:30 -- setup/common.sh@18 -- # local node=0 00:03:46.790 06:56:30 -- setup/common.sh@19 -- # local var val 00:03:46.790 06:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.790 06:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.790 06:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.790 06:56:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.790 06:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.790 06:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8018328 kB' 'MemUsed: 4223644 kB' 'SwapCached: 0 kB' 'Active: 490808 kB' 'Inactive: 1333752 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333752 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1706424 kB' 'Mapped: 48812 kB' 'AnonPages: 119764 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66648 kB' 'Slab: 140004 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 
06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 
06:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # continue 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.790 06:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.790 06:56:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.790 06:56:30 -- setup/common.sh@33 -- # echo 0 00:03:46.790 06:56:30 -- setup/common.sh@33 -- # return 0 00:03:46.790 06:56:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.790 06:56:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.791 06:56:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.791 06:56:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.791 node0=1024 expecting 1024 00:03:46.791 06:56:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.791 06:56:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.791 00:03:46.791 real 0m0.988s 00:03:46.791 user 0m0.479s 00:03:46.791 sys 0m0.466s 00:03:46.791 06:56:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.791 06:56:30 -- common/autotest_common.sh@10 -- # set +x 00:03:46.791 ************************************ 00:03:46.791 END TEST default_setup 00:03:46.791 ************************************ 00:03:46.791 06:56:30 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.791 06:56:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:46.791 06:56:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:46.791 06:56:30 -- common/autotest_common.sh@10 -- # set +x 00:03:46.791 ************************************ 00:03:46.791 START TEST per_node_1G_alloc 00:03:46.791 ************************************ 
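The default_setup trace above repeatedly exercises the get_meminfo helper from setup/common.sh: it picks /proc/meminfo for system-wide lookups or /sys/devices/system/node/node<N>/meminfo for per-node lookups, strips the "Node <N>" prefix from per-node rows, splits each "Key: value" row on ': ', and echoes the value for the requested key (HugePages_Total -> 1024, HugePages_Surp -> 0 in the trace). The following is a minimal stand-alone sketch of that lookup pattern, assuming only standard bash and the meminfo layout shown in the trace; the helper name meminfo_value is illustrative and not part of the SPDK scripts.

    # Sketch of the meminfo lookup pattern seen in the trace above (illustrative,
    # not the actual setup/common.sh code).
    shopt -s extglob   # needed for the +([0-9]) pattern below

    meminfo_value() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node lookups read the node's own meminfo file when it exists,
        # as the trace does for node 0.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <N> " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Key:  value [kB]"
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # e.g. meminfo_value HugePages_Total   -> 1024 (system-wide, as echoed above)
    #      meminfo_value HugePages_Surp 0  -> 0    (node 0, as echoed above)

The per_node_1G_alloc test that starts below applies the same lookups after requesting 1048576 kB of hugepages on node 0; with the 2048 kB default hugepage size shown in the trace that works out to 512 pages, which is why the test invokes scripts/setup.sh with NRHUGE=512 HUGENODE=0 and then expects HugePages_Total: 512 in the meminfo dumps that follow.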
00:03:46.791 06:56:30 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:46.791 06:56:30 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.791 06:56:30 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:46.791 06:56:30 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.791 06:56:30 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.791 06:56:30 -- setup/hugepages.sh@51 -- # shift 00:03:46.791 06:56:30 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.791 06:56:30 -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.791 06:56:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.791 06:56:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.791 06:56:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.791 06:56:30 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.791 06:56:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.791 06:56:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.791 06:56:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.791 06:56:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.791 06:56:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.791 06:56:30 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.791 06:56:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.791 06:56:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.791 06:56:30 -- setup/hugepages.sh@73 -- # return 0 00:03:46.791 06:56:30 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.791 06:56:30 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:46.791 06:56:30 -- setup/hugepages.sh@146 -- # setup output 00:03:46.791 06:56:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.791 06:56:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.050 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.050 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.050 06:56:31 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:47.050 06:56:31 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:47.050 06:56:31 -- setup/hugepages.sh@89 -- # local node 00:03:47.050 06:56:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.050 06:56:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.050 06:56:31 -- setup/hugepages.sh@92 -- # local surp 00:03:47.050 06:56:31 -- setup/hugepages.sh@93 -- # local resv 00:03:47.050 06:56:31 -- setup/hugepages.sh@94 -- # local anon 00:03:47.050 06:56:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.050 06:56:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.050 06:56:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.050 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.050 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.050 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.050 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.050 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.050 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.050 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.050 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9065924 kB' 'MemAvailable: 10558044 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 'SwapCached: 0 kB' 'Active: 491392 kB' 'Inactive: 1333760 kB' 'Active(anon): 129192 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120368 kB' 'Mapped: 49060 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 140004 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73356 kB' 'KernelStack: 6296 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 
-- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.050 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.050 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 
06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.051 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.051 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.051 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.051 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.051 06:56:31 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.051 06:56:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.051 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.051 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.051 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.051 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.051 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.051 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.051 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.051 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.051 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9066372 kB' 'MemAvailable: 10558492 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 'SwapCached: 0 kB' 'Active: 491016 kB' 'Inactive: 1333760 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119996 kB' 'Mapped: 49060 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 
kB' 'Slab: 140004 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73356 kB' 'KernelStack: 6280 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 
06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.313 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.313 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.314 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.314 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.314 06:56:31 -- setup/hugepages.sh@99 -- # surp=0 00:03:47.314 06:56:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.314 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.314 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.314 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.314 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.314 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.314 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.314 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.314 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.314 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9066372 kB' 'MemAvailable: 10558492 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 'SwapCached: 0 kB' 'Active: 491068 kB' 'Inactive: 1333760 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120028 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139996 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73348 kB' 'KernelStack: 6256 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.314 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.314 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- 
# continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.315 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.315 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.315 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.315 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.315 06:56:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.315 nr_hugepages=512 00:03:47.315 06:56:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:47.316 
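# [editor's aside — hedged sketch, not part of the original log] The wall of
# "continue" lines above is setup/common.sh's get_meminfo helper comparing every
# /proc/meminfo key against the one requested; the backslash-escaped patterns
# (e.g. \H\u\g\e\P\a\g\e\s\_\R\s\v\d) are just bash xtrace quoting of the literal
# right-hand side of [[ == ]]. A condensed reconstruction from what the trace
# shows (internals beyond the visible lines are assumptions):
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # e.g. HugePages_Rsvd, optional NUMA node
    local mem_f=/proc/meminfo var val _
    # With a node argument the per-node sysfs file is read instead (it shows up
    # later in this log as /sys/devices/system/node/node0/meminfo).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix on sysfs lines
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # one xtrace "continue" per skipped key
        echo "$val"                        # the "echo 0" / "echo 512" seen above
        return 0
    done
}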
resv_hugepages=0 00:03:47.316 06:56:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.316 surplus_hugepages=0 00:03:47.316 06:56:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.316 anon_hugepages=0 00:03:47.316 06:56:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.316 06:56:31 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.316 06:56:31 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:47.316 06:56:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.316 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.316 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.316 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.316 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.316 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.316 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.316 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.316 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.316 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9066372 kB' 'MemAvailable: 10558492 kB' 'Buffers: 2436 kB' 'Cached: 1703988 kB' 'SwapCached: 0 kB' 'Active: 491036 kB' 'Inactive: 1333760 kB' 'Active(anon): 128836 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119988 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139992 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73344 kB' 'KernelStack: 6240 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 
06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.316 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.316 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.317 06:56:31 -- setup/common.sh@33 -- # echo 512 00:03:47.317 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.317 06:56:31 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.317 06:56:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.317 06:56:31 -- setup/hugepages.sh@27 -- # local node 00:03:47.317 06:56:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.317 06:56:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.317 06:56:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.317 06:56:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.317 06:56:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.317 06:56:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.317 06:56:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.317 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.317 06:56:31 -- setup/common.sh@18 -- # local node=0 00:03:47.317 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.317 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.317 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.317 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.317 06:56:31 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.317 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.317 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9066120 kB' 'MemUsed: 3175852 kB' 'SwapCached: 0 kB' 'Active: 491024 kB' 'Inactive: 1333760 kB' 'Active(anon): 128824 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1706424 kB' 'Mapped: 48812 kB' 'AnonPages: 119992 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66648 kB' 'Slab: 139992 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- 
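# [editor's note] The meminfo dump just above was taken for node 0: with node=0
# the helper switched mem_f to /sys/devices/system/node/node0/meminfo and
# stripped each line's "Node 0 " prefix before running the same key-matching
# loop, here looking for HugePages_Surp. A standalone equivalent for that one
# key (illustration only, not part of the test scripts):
sed -n 's/^Node 0 HugePages_Surp:[[:space:]]*//p' /sys/devices/system/node/node0/meminfo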
setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.317 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.317 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.318 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.318 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.318 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.318 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.318 06:56:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.318 06:56:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.318 node0=512 expecting 512 00:03:47.318 ************************************ 00:03:47.318 END TEST per_node_1G_alloc 00:03:47.318 ************************************ 00:03:47.318 06:56:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.318 06:56:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.318 06:56:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.318 06:56:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.318 00:03:47.318 real 0m0.542s 00:03:47.318 user 0m0.277s 00:03:47.318 sys 0m0.296s 00:03:47.318 06:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.318 06:56:31 -- common/autotest_common.sh@10 -- # set +x 00:03:47.318 06:56:31 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:47.318 06:56:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.318 06:56:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.318 06:56:31 -- common/autotest_common.sh@10 -- # set +x 00:03:47.318 ************************************ 00:03:47.318 START TEST even_2G_alloc 00:03:47.318 ************************************ 00:03:47.318 06:56:31 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:47.318 06:56:31 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:47.318 06:56:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.318 06:56:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.318 06:56:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.318 06:56:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.318 06:56:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.318 06:56:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.318 06:56:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.318 06:56:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.318 06:56:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.318 06:56:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.318 06:56:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.318 06:56:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.318 06:56:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.318 06:56:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.318 06:56:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:47.318 06:56:31 -- setup/hugepages.sh@83 -- # : 0 00:03:47.318 06:56:31 -- 
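# [editor's recap — values taken from the log itself] per_node_1G_alloc passed
# its accounting: surp=0, resv=0 and HugePages_Total=512, so
# (( 512 == nr_hugepages + surp + resv )) held, and node 0's HugePages_Surp of 0
# produced "node0=512 expecting 512". The even_2G_alloc test that has just
# started asks get_test_nr_hugepages for 2097152 (2 GiB in kB, judging by the
# result); with the 2048 kB Hugepagesize shown in the meminfo dumps:
echo $((2097152 / 2048))   # -> 1024, matching nr_hugepages=1024 and NRHUGE=1024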
setup/hugepages.sh@84 -- # : 0 00:03:47.318 06:56:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.318 06:56:31 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:47.318 06:56:31 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:47.318 06:56:31 -- setup/hugepages.sh@153 -- # setup output 00:03:47.318 06:56:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.318 06:56:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.839 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.839 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.839 06:56:31 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:47.839 06:56:31 -- setup/hugepages.sh@89 -- # local node 00:03:47.839 06:56:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.839 06:56:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.839 06:56:31 -- setup/hugepages.sh@92 -- # local surp 00:03:47.839 06:56:31 -- setup/hugepages.sh@93 -- # local resv 00:03:47.839 06:56:31 -- setup/hugepages.sh@94 -- # local anon 00:03:47.839 06:56:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.839 06:56:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.839 06:56:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.839 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.839 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.839 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.839 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.839 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.839 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.839 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.839 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8023288 kB' 'MemAvailable: 9515412 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 491180 kB' 'Inactive: 1333764 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120156 kB' 'Mapped: 48972 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139952 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73304 kB' 'KernelStack: 6280 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 
06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.839 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.839 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # 
continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.840 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.840 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.840 06:56:31 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.840 06:56:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.840 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.840 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.840 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.840 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.840 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.840 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.840 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.840 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.840 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8023036 kB' 'MemAvailable: 9515160 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490984 kB' 'Inactive: 1333764 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139952 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73304 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # 
continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.840 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.840 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- 
# continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.841 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.841 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.841 06:56:31 -- setup/hugepages.sh@99 -- # surp=0 00:03:47.841 06:56:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.841 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.841 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.841 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.841 06:56:31 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:47.841 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.841 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.841 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.841 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.841 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8023036 kB' 'MemAvailable: 9515160 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490924 kB' 'Inactive: 1333764 kB' 'Active(anon): 128724 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119888 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139952 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73304 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.841 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.841 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 
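[Editor's note] The long run of "[[ <field> == \H\u\g\e... ]] / continue" entries around this point is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one requested (here HugePages_Rsvd). A minimal sketch of that scan, reconstructed from the trace rather than from the SPDK source (names and exact details are assumptions):

    #!/usr/bin/env bash
    # Minimal sketch of the field scan traced here; the real setup/common.sh
    # may differ in structure, but the trace shows the same IFS/read/continue loop.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, the per-node meminfo file is read instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            # Each non-matching field shows up as one "continue" line in the xtrace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on this runner, per the log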
00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- 
setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.842 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.842 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.842 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.842 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.842 nr_hugepages=1024 00:03:47.842 resv_hugepages=0 00:03:47.842 06:56:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.842 06:56:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.842 06:56:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.842 surplus_hugepages=0 00:03:47.842 06:56:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.842 anon_hugepages=0 00:03:47.842 06:56:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.842 06:56:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.842 06:56:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.842 06:56:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.842 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.842 06:56:31 -- setup/common.sh@18 -- # local node= 00:03:47.842 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.842 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.842 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.842 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.842 06:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.843 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.843 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8023036 kB' 'MemAvailable: 9515160 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 
490940 kB' 'Inactive: 1333764 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139952 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73304 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- 
setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 
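[Editor's note] The backslash-riddled right-hand side (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) is not log corruption: under set -x, bash escapes each character of a quoted == operand inside [[ ]] to show it is matched literally. An illustrative reproduction (values are arbitrary):

    set -x
    get=HugePages_Total
    var=MemTotal
    [[ $var == "$get" ]] || echo skip   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    set +x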
00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.843 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.843 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 
00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 
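[Editor's note] The values echoed earlier in this log (anon=0, surp=0, resv=0) and the HugePages_Total read that brackets this loop feed the consistency check the test is exercising. A sketch of that arithmetic, using the numbers from this run:

    nr_hugepages=1024   # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages
    (( 1024 == nr_hugepages + surp + resv )) && \
    (( 1024 == nr_hugepages )) && \
    echo "node0=$nr_hugepages expecting 1024"   # matches the echo near the end of this test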
00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.844 06:56:31 -- setup/common.sh@33 -- # echo 1024 00:03:47.844 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.844 06:56:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.844 06:56:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.844 06:56:31 -- setup/hugepages.sh@27 -- # local node 00:03:47.844 06:56:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.844 06:56:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.844 06:56:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.844 06:56:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.844 06:56:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.844 06:56:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.844 06:56:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.844 06:56:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.844 06:56:31 -- setup/common.sh@18 -- # local node=0 00:03:47.844 06:56:31 -- setup/common.sh@19 -- # local var val 00:03:47.844 06:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.844 06:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.844 06:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.844 06:56:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.844 06:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.844 06:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8023036 kB' 'MemUsed: 4218936 kB' 'SwapCached: 0 kB' 'Active: 490968 kB' 'Inactive: 1333764 kB' 'Active(anon): 128768 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1706428 kB' 'Mapped: 48812 kB' 'AnonPages: 119888 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66648 kB' 'Slab: 139948 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 
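[Editor's note] This second scan runs against /sys/devices/system/node/node0/meminfo, reached via the get_nodes step traced a little earlier. A hedged sketch of that enumeration (the extglob pattern node+([0-9]) matches node0, node1, ...; the per-node value of 1024 is taken from the trace, and the real helper may derive it differently):

    shopt -s extglob
    declare -A nodes_sys
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024   # per-node hugepage target seen in the trace
        (( no_nodes += 1 ))
    done
    echo "no_nodes=$no_nodes"   # 1 on this single-node VM, per the log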
00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.844 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.844 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- 
setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # continue 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.845 06:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.845 06:56:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.845 06:56:31 -- setup/common.sh@33 -- # echo 0 00:03:47.845 06:56:31 -- setup/common.sh@33 -- # return 0 00:03:47.845 06:56:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.845 06:56:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.845 06:56:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.845 node0=1024 expecting 1024 00:03:47.845 ************************************ 00:03:47.845 END TEST even_2G_alloc 00:03:47.845 ************************************ 00:03:47.845 06:56:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.845 06:56:31 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.845 06:56:31 -- 
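The node0=1024 check above is per-NUMA-node accounting: the hugepages actually present on each node are compared with what the test asked the kernel to allocate there. A minimal sketch of that kind of check, assuming the usual sysfs layout under /sys/devices/system/node/node*/meminfo and using this run's expected count of 1024 per node; the loop below is an illustration, not the test script's own code.

    # For every NUMA node, read HugePages_Total from its per-node meminfo and report it.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
        total=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
        echo "node${node}=${total} expecting 1024"
    done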
00:03:47.845 06:56:31 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:47.845 real 0m0.603s
00:03:47.845 user 0m0.281s
00:03:47.845 sys 0m0.313s
00:03:47.845 06:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:47.845 06:56:31 -- common/autotest_common.sh@10 -- # set +x
00:03:48.104 06:56:31 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:48.104 06:56:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:48.104 06:56:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:48.104 06:56:31 -- common/autotest_common.sh@10 -- # set +x
00:03:48.104 ************************************
00:03:48.104 START TEST odd_alloc
00:03:48.104 ************************************
00:03:48.104 06:56:31 -- common/autotest_common.sh@1104 -- # odd_alloc
00:03:48.104 06:56:31 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:48.104 06:56:31 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:48.104 06:56:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:48.104 06:56:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.104 06:56:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:48.104 06:56:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-84] no user-specified nodes, so the single node present gets the whole allocation: _nr_hugepages=1025, _no_nodes=1, nodes_test[0]=1025
00:03:48.104 06:56:31 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:48.104 06:56:31 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:48.104 06:56:31 -- setup/hugepages.sh@160 -- # setup output
00:03:48.104 06:56:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.104 06:56:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:48.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:48.364 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.364 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.364 06:56:32 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
[setup/hugepages.sh@89-94] verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon)
00:03:48.364 06:56:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.364 06:56:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.364 06:56:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.364 06:56:32 -- setup/common.sh@18 -- # local node=
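For reference, the 1025-page figure traced above falls out of the HUGEMEM setting: 2049 MB is 2098176 kB, and the default hugepage size on this machine is 2048 kB, so the request does not divide evenly. A small sketch of that arithmetic, assuming the count is rounded up to whole pages (the exact rounding rule in hugepages.sh is an assumption here, but rounding up reproduces the nr_hugepages=1025 seen in the trace).

    HUGEMEM_MB=2049                                              # the value exported above
    size_kb=$(( HUGEMEM_MB * 1024 ))                             # 2098176 kB, as passed to get_test_nr_hugepages
    page_kb=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo) # 2048 kB on this system
    nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))        # round up to whole pages -> 1025
    echo "nr_hugepages=${nr_hugepages}"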
00:03:48.364 06:56:32 -- setup/common.sh@19 -- # local var val
00:03:48.364 06:56:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.364 06:56:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.364 06:56:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.364 06:56:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.364 06:56:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.364 06:56:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.364 06:56:32 -- setup/common.sh@31 -- # IFS=': '
00:03:48.364 06:56:32 -- setup/common.sh@31 -- # read -r var val _
00:03:48.364 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8021016 kB' 'MemAvailable: 9513140 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 491092 kB' 'Inactive: 1333764 kB' 'Active(anon): 128892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120300 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139932 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6248 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32] the IFS=': ' read loop walks this snapshot key by key, skipping every field until it reaches AnonHugePages, which is 0
00:03:48.365 06:56:32 -- setup/common.sh@33 -- # echo 0
00:03:48.365 06:56:32 -- setup/common.sh@33 -- # return 0
00:03:48.365 06:56:32 -- setup/hugepages.sh@97 -- # anon=0
00:03:48.365 06:56:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.365 06:56:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
[setup/common.sh@18-31] /proc/meminfo is read into mem[] again (no NUMA node selected) and the same read loop starts over
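The long per-field trace above reduces to a simple lookup: split each /proc/meminfo line on ': ', skip keys until the requested one appears, and print its value (0 kB for AnonHugePages in this run). A condensed sketch of that pattern as it is visible in the trace; it is an illustration, not the get_meminfo function itself, and it ignores the optional per-node path.

    get=AnonHugePages
    while IFS=': ' read -r var val _; do
        # e.g. "AnonHugePages:         0 kB" splits into var=AnonHugePages, val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            break
        fi
    done < /proc/meminfo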
00:03:48.366 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8020764 kB' 'MemAvailable: 9512888 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490624 kB' 'Inactive: 1333764 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139936 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73288 kB' 'KernelStack: 6304 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32] the read loop walks this second snapshot key by key until it reaches HugePages_Surp, which is 0
00:03:48.367 06:56:32 -- setup/common.sh@33 -- # echo 0
00:03:48.367 06:56:32 -- setup/common.sh@33 -- # return 0
00:03:48.367 06:56:32 -- setup/hugepages.sh@99 -- # surp=0
00:03:48.367 06:56:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.367 06:56:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[setup/common.sh@18-31] /proc/meminfo is read into mem[] once more and the read loop restarts
00:03:48.367 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8020764 kB' 'MemAvailable: 9512888 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490664 kB' 'Inactive: 1333764 kB' 'Active(anon): 128464 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139936 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73288 kB' 'KernelStack: 6272 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32] the read loop walks this third snapshot key by key until it reaches HugePages_Rsvd, which is 0
00:03:48.628 06:56:32 -- setup/common.sh@33 -- # echo 0
00:03:48.628 06:56:32 -- setup/common.sh@33 -- # return 0
00:03:48.628 nr_hugepages=1025
00:03:48.628 resv_hugepages=0
00:03:48.628 surplus_hugepages=0
00:03:48.628 anon_hugepages=0
00:03:48.628 06:56:32 -- setup/hugepages.sh@100 -- # resv=0
00:03:48.628 06:56:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:48.628 06:56:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:48.628 06:56:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:48.628 06:56:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:48.628 06:56:32 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:48.628 06:56:32 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:48.628 06:56:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:48.628 06:56:32 -- setup/common.sh@17 -- # local get=HugePages_Total
[setup/common.sh@18-31] /proc/meminfo is read into mem[] a final time and the read loop restarts
00:03:48.628 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8020764 kB' 'MemAvailable: 9512888 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490800 kB' 'Inactive: 1333764 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119980 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139932 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6240 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 348524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB'
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.628 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 
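The xtrace through this block is setup/common.sh's get_meminfo scanning /proc/meminfo key by key until it reaches HugePages_Total for the odd_alloc check. A minimal, hedged sketch of that lookup, reconstructed only from the traced statements (mem_f=/proc/meminfo, mapfile -t mem, the "Node N " prefix strip, IFS=': ' plus read -r var val _) and not from the script source:

shopt -s extglob

# Sketch of the get_meminfo helper as it appears in the trace: read either
# /proc/meminfo or a per-node meminfo file, strip any "Node N " prefix, and
# echo the value of the requested key.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries (e.g. get_meminfo HugePages_Surp 0) read the node's own file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"          # kB for sizes, a bare count for HugePages_* keys
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total      # prints 1025 on this run, matching the echo in the trace

Called with a node argument, the same helper walks /sys/devices/system/node/node0/meminfo instead, which is the node=0 pass that follows later in this log.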
00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.629 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.629 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.629 06:56:32 -- setup/common.sh@33 -- # echo 1025 00:03:48.629 06:56:32 -- setup/common.sh@33 -- # return 0 00:03:48.629 06:56:32 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.629 06:56:32 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.629 06:56:32 -- setup/hugepages.sh@27 -- # local node 00:03:48.630 06:56:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.630 06:56:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:48.630 06:56:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.630 06:56:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.630 06:56:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.630 06:56:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 
resv )) 00:03:48.630 06:56:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.630 06:56:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.630 06:56:32 -- setup/common.sh@18 -- # local node=0 00:03:48.630 06:56:32 -- setup/common.sh@19 -- # local var val 00:03:48.630 06:56:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.630 06:56:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.630 06:56:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.630 06:56:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.630 06:56:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.630 06:56:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8020764 kB' 'MemUsed: 4221208 kB' 'SwapCached: 0 kB' 'Active: 490652 kB' 'Inactive: 1333764 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1706428 kB' 'Mapped: 48812 kB' 'AnonPages: 119628 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66648 kB' 'Slab: 139916 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- 
setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.630 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.630 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.631 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.631 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.631 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.631 06:56:32 -- setup/common.sh@33 -- # echo 0 00:03:48.631 06:56:32 -- setup/common.sh@33 -- # return 0 00:03:48.631 06:56:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.631 06:56:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.631 06:56:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.631 06:56:32 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:48.631 node0=1025 expecting 1025 00:03:48.631 ************************************ 00:03:48.631 END TEST odd_alloc 00:03:48.631 ************************************ 00:03:48.631 06:56:32 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:48.631 00:03:48.631 real 0m0.557s 00:03:48.631 user 0m0.275s 00:03:48.631 sys 0m0.297s 00:03:48.631 06:56:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.631 06:56:32 -- common/autotest_common.sh@10 -- # set +x 00:03:48.631 06:56:32 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.631 06:56:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.631 06:56:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.631 06:56:32 -- common/autotest_common.sh@10 -- # set +x 00:03:48.631 ************************************ 00:03:48.631 START TEST custom_alloc 00:03:48.631 ************************************ 00:03:48.631 06:56:32 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:48.631 06:56:32 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.631 06:56:32 -- setup/hugepages.sh@169 -- # local node 00:03:48.631 06:56:32 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.631 06:56:32 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.631 06:56:32 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.631 06:56:32 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:48.631 06:56:32 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.631 06:56:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.631 06:56:32 -- 
setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.631 06:56:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.631 06:56:32 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.631 06:56:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.631 06:56:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.631 06:56:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.631 06:56:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.631 06:56:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.631 06:56:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.631 06:56:32 -- setup/hugepages.sh@83 -- # : 0 00:03:48.631 06:56:32 -- setup/hugepages.sh@84 -- # : 0 00:03:48.631 06:56:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.631 06:56:32 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.631 06:56:32 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.631 06:56:32 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.631 06:56:32 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.631 06:56:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.631 06:56:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.631 06:56:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.631 06:56:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.631 06:56:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.631 06:56:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.631 06:56:32 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.631 06:56:32 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.631 06:56:32 -- setup/hugepages.sh@78 -- # return 0 00:03:48.631 06:56:32 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:48.631 06:56:32 -- setup/hugepages.sh@187 -- # setup output 00:03:48.631 06:56:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.631 06:56:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.890 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.890 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.890 06:56:32 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:48.890 06:56:32 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:48.890 06:56:32 -- setup/hugepages.sh@89 -- # local node 00:03:48.890 06:56:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.890 06:56:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.890 06:56:32 -- setup/hugepages.sh@92 -- # local surp 00:03:48.890 06:56:32 -- setup/hugepages.sh@93 -- # local resv 00:03:48.890 06:56:32 -- setup/hugepages.sh@94 -- # local anon 00:03:48.890 06:56:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.890 06:56:32 -- setup/hugepages.sh@97 -- 
# get_meminfo AnonHugePages 00:03:48.890 06:56:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.890 06:56:32 -- setup/common.sh@18 -- # local node= 00:03:48.890 06:56:32 -- setup/common.sh@19 -- # local var val 00:03:48.890 06:56:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.890 06:56:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.890 06:56:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.890 06:56:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.890 06:56:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.890 06:56:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9070832 kB' 'MemAvailable: 10562956 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 491472 kB' 'Inactive: 1333764 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120744 kB' 'Mapped: 49200 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139932 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73284 kB' 'KernelStack: 6320 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:48.890 06:56:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.890 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 06:56:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.890 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 06:56:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.890 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 
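The scan running here is the first get_meminfo call of verify_nr_hugepages for the custom_alloc case; the interesting part happened just before it, where hugepages.sh built HUGENODE='nodes_hp[0]=512' and ran scripts/setup.sh. A simplified reconstruction of that hand-off, with the setup.sh contract taken only from what the trace shows and therefore to be treated as an assumption:

# Sketch of how the custom_alloc test appears to request 512 x 2048 kB pages on
# node 0: it collects per-node counts in nodes_hp and flattens them into the
# HUGENODE string that scripts/setup.sh is given in its environment.
declare -a nodes_hp HUGENODE
nodes_hp[0]=512

for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done

# The single-node case collapses to the literal value seen in the log.
printf "HUGENODE='%s' scripts/setup.sh\n" "${HUGENODE[*]}"
# -> HUGENODE='nodes_hp[0]=512' scripts/setup.sh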
00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.891 06:56:32 -- setup/common.sh@32 -- # continue 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.155 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.155 06:56:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.156 06:56:32 -- setup/common.sh@33 -- # echo 0 00:03:49.156 06:56:32 -- setup/common.sh@33 -- # return 0 00:03:49.156 06:56:32 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.156 06:56:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.156 06:56:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.156 06:56:32 -- setup/common.sh@18 -- # local node= 00:03:49.156 06:56:32 -- setup/common.sh@19 -- # local var val 00:03:49.156 06:56:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.156 06:56:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.156 06:56:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.156 06:56:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.156 06:56:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.156 06:56:32 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9070832 kB' 'MemAvailable: 10562956 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 491040 kB' 'Inactive: 1333764 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139944 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73296 kB' 'KernelStack: 6288 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- 
setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 
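Past the AnonHugePages pass, verify_nr_hugepages repeats the same scan for HugePages_Surp and HugePages_Rsvd and then does plain accounting: the pool the kernel reports has to equal the requested size (512 here, 1025 for the odd_alloc run above) once surplus and reserved pages are added in. A rough sketch of that check, reusing the get_meminfo sketch given earlier (check_hugepage_pool is a made-up name for illustration; in the log this logic lives inline in verify_nr_hugepages):

# Accounting step of the hugepage verification as the trace shows it: all three
# numbers come live from /proc/meminfo, only the expected pool size is fixed.
check_hugepage_pool() {
    local expected=$1
    local total surp resv
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    (( expected == total + surp + resv ))   # non-zero exit fails the test
}

check_hugepage_pool 512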
00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.156 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.156 06:56:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.157 06:56:32 -- setup/common.sh@33 -- # echo 0 00:03:49.157 06:56:32 -- setup/common.sh@33 -- # return 0 00:03:49.157 06:56:32 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.157 06:56:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.157 06:56:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.157 06:56:32 -- setup/common.sh@18 -- # local node= 00:03:49.157 06:56:32 -- setup/common.sh@19 -- # local var val 00:03:49.157 06:56:32 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.157 06:56:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.157 06:56:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.157 06:56:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.157 06:56:32 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.157 06:56:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.157 06:56:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9070832 kB' 'MemAvailable: 10562956 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 491000 kB' 'Inactive: 1333764 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119968 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'KernelStack: 6288 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 
06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.157 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.157 06:56:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # 
continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:32 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.158 06:56:33 -- setup/common.sh@33 -- # echo 0 00:03:49.158 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.158 06:56:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.158 nr_hugepages=512 00:03:49.158 06:56:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:49.158 resv_hugepages=0 00:03:49.158 06:56:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.158 surplus_hugepages=0 00:03:49.158 06:56:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.158 anon_hugepages=0 00:03:49.158 06:56:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.158 06:56:33 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:49.158 06:56:33 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:49.158 06:56:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.158 06:56:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.158 06:56:33 -- setup/common.sh@18 -- # local node= 00:03:49.158 06:56:33 -- setup/common.sh@19 -- # local var val 00:03:49.158 06:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.158 06:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.158 06:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.158 06:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.158 06:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.158 06:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.158 06:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9070832 kB' 'MemAvailable: 10562956 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490940 kB' 'Inactive: 1333764 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'KernelStack: 6240 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.158 06:56:33 -- setup/common.sh@32 -- # continue 
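The xtrace above is produced by a get_meminfo helper that resolves one field out of /proc/meminfo (or out of the per-node /sys/devices/system/node/nodeN/meminfo file when a node id is passed), by splitting each line on ': ', comparing the key against the requested name, and echoing the matching value. The following is a minimal bash sketch of that pattern reconstructed from the trace itself, not the literal setup/common.sh source:

#!/usr/bin/env bash
shopt -s extglob

# Sketch (reconstructed from the trace, an assumption rather than the real
# setup/common.sh code) of the get_meminfo lookup logged above.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node id, read that node's meminfo file instead of the global one.
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }        # per-node entries are prefixed "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                    # value only; the kB unit lands in the throwaway field
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Total      # prints 512 for the memory state logged above
get_meminfo HugePages_Surp 0     # same lookup against node0's meminfo

Called with HugePages_Total it would print 512 for the run shown here; passing a node id (as the test does later with HugePages_Surp 0) switches the source file to node0's meminfo, which is why the trace strips the "Node N " prefix before matching keys.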
00:03:49.158 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.158 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': 
' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.159 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.159 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.160 06:56:33 -- setup/common.sh@33 -- # echo 512 00:03:49.160 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.160 06:56:33 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:49.160 06:56:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.160 06:56:33 -- setup/hugepages.sh@27 -- # local node 00:03:49.160 06:56:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.160 06:56:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.160 06:56:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:49.160 06:56:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.160 
06:56:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.160 06:56:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.160 06:56:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.160 06:56:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.160 06:56:33 -- setup/common.sh@18 -- # local node=0 00:03:49.160 06:56:33 -- setup/common.sh@19 -- # local var val 00:03:49.160 06:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.160 06:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.160 06:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.160 06:56:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.160 06:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.160 06:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9070832 kB' 'MemUsed: 3171140 kB' 'SwapCached: 0 kB' 'Active: 490572 kB' 'Inactive: 1333764 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1706428 kB' 'Mapped: 48812 kB' 'AnonPages: 119580 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 
06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.160 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.160 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.161 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.161 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.161 06:56:33 -- setup/common.sh@33 -- # echo 0 00:03:49.161 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.161 06:56:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.161 06:56:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.161 06:56:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.161 06:56:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.161 node0=512 expecting 512 00:03:49.161 06:56:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:49.161 06:56:33 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:49.161 00:03:49.161 real 0m0.527s 00:03:49.161 user 0m0.274s 00:03:49.161 sys 0m0.285s 00:03:49.161 06:56:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.161 06:56:33 -- common/autotest_common.sh@10 -- # set +x 00:03:49.161 ************************************ 00:03:49.161 END TEST custom_alloc 00:03:49.161 ************************************ 00:03:49.161 06:56:33 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:49.161 06:56:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:49.161 06:56:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:49.161 06:56:33 -- common/autotest_common.sh@10 -- # set +x 00:03:49.161 ************************************ 00:03:49.161 START TEST no_shrink_alloc 00:03:49.161 ************************************ 00:03:49.161 06:56:33 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:49.161 06:56:33 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:49.161 06:56:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.161 06:56:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:49.161 06:56:33 -- setup/hugepages.sh@51 -- # shift 00:03:49.161 06:56:33 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:49.161 06:56:33 -- setup/hugepages.sh@52 -- # local node_ids 00:03:49.161 06:56:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.161 06:56:33 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:03:49.161 06:56:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:49.161 06:56:33 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:49.161 06:56:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.161 06:56:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.161 06:56:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:49.161 06:56:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.161 06:56:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.161 06:56:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:49.161 06:56:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.161 06:56:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:49.161 06:56:33 -- setup/hugepages.sh@73 -- # return 0 00:03:49.161 06:56:33 -- setup/hugepages.sh@198 -- # setup output 00:03:49.161 06:56:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.161 06:56:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.419 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.419 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.690 06:56:33 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:49.690 06:56:33 -- setup/hugepages.sh@89 -- # local node 00:03:49.690 06:56:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.690 06:56:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.690 06:56:33 -- setup/hugepages.sh@92 -- # local surp 00:03:49.690 06:56:33 -- setup/hugepages.sh@93 -- # local resv 00:03:49.690 06:56:33 -- setup/hugepages.sh@94 -- # local anon 00:03:49.690 06:56:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.690 06:56:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.690 06:56:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.690 06:56:33 -- setup/common.sh@18 -- # local node= 00:03:49.690 06:56:33 -- setup/common.sh@19 -- # local var val 00:03:49.690 06:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.690 06:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.690 06:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.690 06:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.690 06:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.690 06:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016732 kB' 'MemAvailable: 9508856 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 491352 kB' 'Inactive: 1333764 kB' 'Active(anon): 129152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120612 kB' 'Mapped: 48988 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'KernelStack: 6324 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.690 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.690 06:56:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 
06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # 
continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.691 06:56:33 -- setup/common.sh@33 -- # echo 0 00:03:49.691 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.691 06:56:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.691 06:56:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.691 06:56:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.691 06:56:33 -- setup/common.sh@18 -- # local node= 00:03:49.691 06:56:33 -- setup/common.sh@19 -- # local var val 00:03:49.691 06:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.691 06:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.691 06:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.691 06:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.691 06:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.691 06:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016732 kB' 'MemAvailable: 9508856 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490868 kB' 'Inactive: 1333764 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119836 kB' 'Mapped: 48988 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'KernelStack: 6264 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # 
continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.691 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.691 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.692 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.692 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 
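The records above and below are setup/common.sh's get_meminfo walking its meminfo source key by key and discarding every non-matching field with "continue" until it reaches the one it was asked for (here HugePages_Surp). A minimal sketch of that loop, assuming the shape the trace suggests rather than quoting the actual setup/common.sh source:

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # per-node lookups switch to the sysfs copy, as common.sh@23-24 shows later in this log
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo key
            echo "$val" && return 0            # e.g. 0 for the AnonHugePages pass above
        done <"$mem_f"
    }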
00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.693 06:56:33 -- setup/common.sh@33 -- # echo 0 00:03:49.693 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.693 06:56:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.693 06:56:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.693 06:56:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.693 06:56:33 -- setup/common.sh@18 -- # local node= 00:03:49.693 06:56:33 -- setup/common.sh@19 -- # local var val 00:03:49.693 06:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.693 06:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.693 06:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.693 06:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.693 06:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.693 06:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016732 kB' 'MemAvailable: 9508856 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490740 kB' 'Inactive: 1333764 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119936 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'KernelStack: 6272 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.693 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.693 06:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 
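The key being searched for in this pass is HugePages_Rsvd; the previous pass looked up HugePages_Surp. In /proc/meminfo, HugePages_Surp counts surplus pages sitting in the pool above vm.nr_hugepages (overcommit), while HugePages_Rsvd counts pages already promised to mappings but not yet faulted in; both are needed before the pool size can be validated. One way to eyeball the same counters by hand:

    grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo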
00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.694 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.694 06:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- 
setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 
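The runs of backslashes in patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are an artifact of bash xtrace: when the right-hand side of == inside [[ ]] is quoted, the trace escapes every character so the printed command still reads as a literal comparison rather than a glob. A quick, standalone way to reproduce the effect (hypothetical snippet, not from the test scripts):

    get=HugePages_Rsvd
    set -x
    [[ HugePages_Total == "$get" ]] || echo "no match"
    # traced roughly as: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]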
00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.695 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.695 06:56:33 -- setup/common.sh@33 -- # echo 0 00:03:49.695 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.695 06:56:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.695 nr_hugepages=1024 00:03:49.695 06:56:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.695 resv_hugepages=0 00:03:49.695 06:56:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.695 surplus_hugepages=0 00:03:49.695 06:56:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.695 06:56:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.695 anon_hugepages=0 00:03:49.695 06:56:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.695 06:56:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.695 06:56:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.695 06:56:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.695 06:56:33 -- setup/common.sh@18 -- # local node= 00:03:49.695 06:56:33 -- setup/common.sh@19 -- # local var val 00:03:49.695 06:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.695 06:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
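At this point hugepages.sh has collected anon=0, surp=0 and resv=0 and has run the first consistency checks traced at @107 and @109; the HugePages_Total pass starting here feeds the follow-up check at @110. The gist of the arithmetic, sketched rather than copied from the script:

    nr_hugepages=1024                      # pool size this test expects
    anon=0 surp=0 resv=0                   # results of the three get_meminfo passes above
    total=$(grep '^HugePages_Total:' /proc/meminfo | awk '{print $2}')
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
    (( total == nr_hugepages ))               || echo "surplus or reserved pages present"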
00:03:49.695 06:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.695 06:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.695 06:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.695 06:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.695 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016480 kB' 'MemAvailable: 9508604 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 490708 kB' 'Inactive: 1333764 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'KernelStack: 6272 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 348892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- 
setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.696 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.696 06:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.700 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 
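Each pass re-reads its source through the same preamble seen at common.sh@28-29 earlier in this pass: mapfile -t mem slurps the file and mem=("${mem[@]#Node +([0-9]) }") strips the "Node <id> " prefix that the per-node sysfs meminfo files carry, so /proc/meminfo and /sys/devices/system/node/nodeN/meminfo parse identically. A small demonstration with made-up sample lines:

    shopt -s extglob
    mem=("Node 0 HugePages_Total: 1024" "Node 0 HugePages_Free: 1024")
    mem=("${mem[@]#Node +([0-9]) }")   # -> "HugePages_Total: 1024" "HugePages_Free: 1024"
    printf '%s\n' "${mem[@]}"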
00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 
00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.701 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.701 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 
00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.702 06:56:33 -- setup/common.sh@33 -- # echo 1024 00:03:49.702 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.702 06:56:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.702 06:56:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.702 06:56:33 -- setup/hugepages.sh@27 -- # local node 00:03:49.702 06:56:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.702 06:56:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.702 06:56:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:49.702 06:56:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.702 06:56:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.702 06:56:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.702 06:56:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.702 06:56:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.702 06:56:33 -- setup/common.sh@18 -- # local node=0 00:03:49.702 06:56:33 -- setup/common.sh@19 -- # local var val 00:03:49.702 06:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.702 06:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.702 06:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.702 06:56:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.702 06:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.702 06:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8016480 kB' 'MemUsed: 4225492 kB' 'SwapCached: 0 kB' 'Active: 490640 kB' 'Inactive: 1333764 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1706428 kB' 'Mapped: 48812 kB' 'AnonPages: 119844 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66648 kB' 'Slab: 139940 
kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.702 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 
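The scan running here is the per-node repeat of the same checks: get_nodes (hugepages.sh@112) enumerated /sys/devices/system/node/node0, and get_meminfo is now invoked with node=0, so common.sh@23-24 switched mem_f to that node's own meminfo before the HugePages_Surp lookup; the result is compared against the expected per-node count ("node0=1024 expecting 1024" below). A sketch of how the per-node counters could be collected, assuming the shape the trace suggests:

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
    done
    echo "node0=${nodes_sys[0]} expecting 1024"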
00:03:49.702 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.702 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- 
setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # continue 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.703 06:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.703 06:56:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.703 06:56:33 -- setup/common.sh@33 -- # echo 0 00:03:49.703 06:56:33 -- setup/common.sh@33 -- # return 0 00:03:49.703 06:56:33 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:03:49.703 06:56:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.703 06:56:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.704 06:56:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.704 node0=1024 expecting 1024 00:03:49.704 06:56:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.704 06:56:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.704 06:56:33 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:49.704 06:56:33 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:49.704 06:56:33 -- setup/hugepages.sh@202 -- # setup output 00:03:49.704 06:56:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.704 06:56:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.964 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.964 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.964 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:49.964 06:56:34 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:49.964 06:56:34 -- setup/hugepages.sh@89 -- # local node 00:03:49.964 06:56:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.964 06:56:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.964 06:56:34 -- setup/hugepages.sh@92 -- # local surp 00:03:49.964 06:56:34 -- setup/hugepages.sh@93 -- # local resv 00:03:49.964 06:56:34 -- setup/hugepages.sh@94 -- # local anon 00:03:49.964 06:56:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.234 06:56:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.234 06:56:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.234 06:56:34 -- setup/common.sh@18 -- # local node= 00:03:50.234 06:56:34 -- setup/common.sh@19 -- # local var val 00:03:50.234 06:56:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.234 06:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.234 06:56:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.234 06:56:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.234 06:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.234 06:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.234 06:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8014960 kB' 'MemAvailable: 9507084 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 487160 kB' 'Inactive: 1333764 kB' 'Active(anon): 124960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116448 kB' 'Mapped: 48444 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139856 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73208 kB' 'KernelStack: 6196 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.234 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.234 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- 
setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.235 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.235 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.236 06:56:34 -- setup/common.sh@33 -- # echo 0 00:03:50.236 06:56:34 -- setup/common.sh@33 -- # return 0 00:03:50.236 06:56:34 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.236 06:56:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.236 06:56:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.236 06:56:34 -- setup/common.sh@18 -- # local node= 00:03:50.236 06:56:34 -- setup/common.sh@19 -- # local var val 00:03:50.236 06:56:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.236 06:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.236 06:56:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.236 06:56:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.236 06:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.236 06:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8015480 kB' 'MemAvailable: 9507604 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 486912 kB' 'Inactive: 1333764 kB' 'Active(anon): 124712 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115872 kB' 'Mapped: 48072 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139832 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73184 kB' 'KernelStack: 6192 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.236 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.236 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 
06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.237 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.237 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.238 06:56:34 -- 
setup/common.sh@33 -- # echo 0 00:03:50.238 06:56:34 -- setup/common.sh@33 -- # return 0 00:03:50.238 06:56:34 -- setup/hugepages.sh@99 -- # surp=0 00:03:50.238 06:56:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.238 06:56:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.238 06:56:34 -- setup/common.sh@18 -- # local node= 00:03:50.238 06:56:34 -- setup/common.sh@19 -- # local var val 00:03:50.238 06:56:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.238 06:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.238 06:56:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.238 06:56:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.238 06:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.238 06:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8015544 kB' 'MemAvailable: 9507668 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 486668 kB' 'Inactive: 1333764 kB' 'Active(anon): 124468 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115908 kB' 'Mapped: 48072 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139832 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73184 kB' 'KernelStack: 6192 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- 
setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.238 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.238 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.239 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.239 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.240 06:56:34 -- setup/common.sh@33 -- # echo 0 00:03:50.240 06:56:34 -- setup/common.sh@33 -- # return 0 00:03:50.240 06:56:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:50.240 nr_hugepages=1024 00:03:50.240 06:56:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.240 resv_hugepages=0 00:03:50.240 06:56:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.240 surplus_hugepages=0 00:03:50.240 06:56:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.240 anon_hugepages=0 00:03:50.240 06:56:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.240 06:56:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.240 06:56:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.240 06:56:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.240 06:56:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.240 06:56:34 -- setup/common.sh@18 -- # local node= 00:03:50.240 06:56:34 -- setup/common.sh@19 -- # local var val 00:03:50.240 06:56:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.240 06:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.240 06:56:34 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:50.240 06:56:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.240 06:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.240 06:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8015544 kB' 'MemAvailable: 9507668 kB' 'Buffers: 2436 kB' 'Cached: 1703992 kB' 'SwapCached: 0 kB' 'Active: 486564 kB' 'Inactive: 1333764 kB' 'Active(anon): 124364 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115508 kB' 'Mapped: 48072 kB' 'Shmem: 10464 kB' 'KReclaimable: 66648 kB' 'Slab: 139820 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73172 kB' 'KernelStack: 6144 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 4012032 kB' 'DirectMap1G: 10485760 kB' 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.240 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.240 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 
-- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.241 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.241 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 
06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.242 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.242 06:56:34 -- setup/common.sh@33 -- # echo 1024 00:03:50.242 06:56:34 -- setup/common.sh@33 -- # return 0 00:03:50.242 06:56:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.242 06:56:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.242 06:56:34 -- setup/hugepages.sh@27 -- # local node 00:03:50.242 06:56:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.242 06:56:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.242 06:56:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:50.242 06:56:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.242 06:56:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.242 06:56:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.242 06:56:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.242 06:56:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.242 06:56:34 -- setup/common.sh@18 -- # local node=0 00:03:50.242 06:56:34 -- setup/common.sh@19 -- # local var val 00:03:50.242 06:56:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.242 06:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.242 06:56:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.242 06:56:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.242 06:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.242 06:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.242 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8023600 kB' 'MemUsed: 4218372 kB' 'SwapCached: 0 kB' 'Active: 486520 kB' 'Inactive: 1333764 kB' 'Active(anon): 124320 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1333764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1706428 kB' 'Mapped: 48072 kB' 'AnonPages: 115724 kB' 'Shmem: 10464 kB' 'KernelStack: 6128 kB' 'PageTables: 3684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66648 kB' 'Slab: 139820 kB' 'SReclaimable: 66648 kB' 'SUnreclaim: 73172 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.243 06:56:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.243 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.243 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # continue 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 06:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 06:56:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.244 06:56:34 -- setup/common.sh@33 -- # echo 0 00:03:50.244 06:56:34 -- setup/common.sh@33 -- # return 0 00:03:50.244 06:56:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.244 06:56:34 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.244 06:56:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.244 06:56:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.244 node0=1024 expecting 1024 00:03:50.244 06:56:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:50.244 06:56:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.244 00:03:50.244 real 0m1.039s 00:03:50.244 user 0m0.514s 00:03:50.244 sys 0m0.589s 00:03:50.244 06:56:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.244 06:56:34 -- common/autotest_common.sh@10 -- # set +x 00:03:50.244 ************************************ 00:03:50.244 END TEST no_shrink_alloc 00:03:50.244 ************************************ 00:03:50.244 06:56:34 -- setup/hugepages.sh@217 -- # clear_hp 00:03:50.244 06:56:34 -- setup/hugepages.sh@37 -- # local node hp 00:03:50.244 06:56:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:50.244 06:56:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.244 06:56:34 -- setup/hugepages.sh@41 -- # echo 0 00:03:50.244 06:56:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.244 06:56:34 -- setup/hugepages.sh@41 -- # echo 0 00:03:50.244 06:56:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:50.244 06:56:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:50.244 00:03:50.244 real 0m4.709s 00:03:50.244 user 0m2.264s 00:03:50.244 sys 0m2.511s 00:03:50.244 06:56:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.244 ************************************ 00:03:50.244 END TEST hugepages 00:03:50.244 06:56:34 -- common/autotest_common.sh@10 -- # set +x 00:03:50.244 ************************************ 00:03:50.244 06:56:34 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:50.244 06:56:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:50.244 06:56:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:50.244 06:56:34 -- common/autotest_common.sh@10 -- # set +x 00:03:50.244 ************************************ 00:03:50.244 START TEST driver 00:03:50.244 ************************************ 00:03:50.244 06:56:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:50.534 * Looking for test storage... 
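The hugepages accounting traced above works by scanning a meminfo file one "key: value" pair at a time and echoing the value for the requested key. The fragment below is a minimal standalone sketch of that lookup, written in plain bash; the function name and argument handling are illustrative, not the exact setup/common.sh helper. It reads /proc/meminfo, or the per-NUMA-node file when a node number is given, and prints the value for one key such as HugePages_Total or HugePages_Surp.

get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo line var val
    # Per-node statistics live under sysfs and prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}              # strip the per-node prefix, if any
        IFS=': ' read -r var val _ <<<"$line"   # e.g. "HugePages_Total:    1024"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done <"$file"
    return 1
}

With a helper like this, the check logged at hugepages.sh@110 reduces to comparing the returned number against the expected total, i.e. the (( 1024 == nr_hugepages + surp + resv )) test visible in the trace, and the per-node loop simply repeats the lookup with HugePages_Surp and a node index.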
00:03:50.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.534 06:56:34 -- setup/driver.sh@68 -- # setup reset 00:03:50.534 06:56:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.534 06:56:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.102 06:56:34 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:51.102 06:56:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.102 06:56:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.102 06:56:34 -- common/autotest_common.sh@10 -- # set +x 00:03:51.102 ************************************ 00:03:51.102 START TEST guess_driver 00:03:51.102 ************************************ 00:03:51.102 06:56:34 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:51.102 06:56:34 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:51.102 06:56:34 -- setup/driver.sh@47 -- # local fail=0 00:03:51.102 06:56:34 -- setup/driver.sh@49 -- # pick_driver 00:03:51.102 06:56:34 -- setup/driver.sh@36 -- # vfio 00:03:51.102 06:56:34 -- setup/driver.sh@21 -- # local iommu_grups 00:03:51.102 06:56:34 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:51.102 06:56:34 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:51.102 06:56:34 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:51.102 06:56:34 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:51.102 06:56:34 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:51.102 06:56:34 -- setup/driver.sh@32 -- # return 1 00:03:51.102 06:56:34 -- setup/driver.sh@38 -- # uio 00:03:51.102 06:56:34 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:51.102 06:56:34 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:51.102 06:56:34 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:51.102 06:56:34 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:51.102 06:56:34 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:51.102 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:51.102 06:56:34 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:51.102 06:56:34 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:51.102 06:56:34 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:51.102 Looking for driver=uio_pci_generic 00:03:51.102 06:56:34 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:51.102 06:56:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.102 06:56:34 -- setup/driver.sh@45 -- # setup output config 00:03:51.102 06:56:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.102 06:56:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.669 06:56:35 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:51.669 06:56:35 -- setup/driver.sh@58 -- # continue 00:03:51.669 06:56:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.669 06:56:35 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.669 06:56:35 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.669 06:56:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.928 06:56:35 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.928 06:56:35 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.928 06:56:35 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.928 06:56:35 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:51.928 06:56:35 -- setup/driver.sh@65 -- # setup reset 00:03:51.928 06:56:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.928 06:56:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.496 00:03:52.496 real 0m1.438s 00:03:52.496 user 0m0.534s 00:03:52.496 sys 0m0.885s 00:03:52.496 06:56:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.496 ************************************ 00:03:52.496 END TEST guess_driver 00:03:52.496 ************************************ 00:03:52.496 06:56:36 -- common/autotest_common.sh@10 -- # set +x 00:03:52.497 ************************************ 00:03:52.497 END TEST driver 00:03:52.497 ************************************ 00:03:52.497 00:03:52.497 real 0m2.140s 00:03:52.497 user 0m0.787s 00:03:52.497 sys 0m1.375s 00:03:52.497 06:56:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.497 06:56:36 -- common/autotest_common.sh@10 -- # set +x 00:03:52.497 06:56:36 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:52.497 06:56:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.497 06:56:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.497 06:56:36 -- common/autotest_common.sh@10 -- # set +x 00:03:52.497 ************************************ 00:03:52.497 START TEST devices 00:03:52.497 ************************************ 00:03:52.497 06:56:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:52.497 * Looking for test storage... 00:03:52.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:52.497 06:56:36 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:52.497 06:56:36 -- setup/devices.sh@192 -- # setup reset 00:03:52.497 06:56:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.497 06:56:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.433 06:56:37 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.433 06:56:37 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:53.433 06:56:37 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:53.433 06:56:37 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:53.433 06:56:37 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:53.433 06:56:37 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:53.433 06:56:37 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:53.433 06:56:37 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.433 06:56:37 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:53.433 06:56:37 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:53.433 06:56:37 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:53.433 06:56:37 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:53.433 06:56:37 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.433 06:56:37 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:53.433 06:56:37 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:53.433 06:56:37 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:03:53.433 06:56:37 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:03:53.433 06:56:37 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.433 06:56:37 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:53.433 06:56:37 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:53.433 06:56:37 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:03:53.433 06:56:37 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:03:53.433 06:56:37 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.433 06:56:37 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:53.433 06:56:37 -- setup/devices.sh@196 -- # blocks=() 00:03:53.433 06:56:37 -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.433 06:56:37 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.433 06:56:37 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.433 06:56:37 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.433 06:56:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.433 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.433 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.433 06:56:37 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:53.433 06:56:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:53.433 06:56:37 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.433 06:56:37 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:53.433 06:56:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.433 No valid GPT data, bailing 00:03:53.433 06:56:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.433 06:56:37 -- scripts/common.sh@393 -- # pt= 00:03:53.433 06:56:37 -- scripts/common.sh@394 -- # return 1 00:03:53.433 06:56:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.433 06:56:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.433 06:56:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.433 06:56:37 -- setup/common.sh@80 -- # echo 5368709120 00:03:53.433 06:56:37 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:53.433 06:56:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.433 06:56:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:53.433 06:56:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.433 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:53.433 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.433 06:56:37 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:53.433 06:56:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:53.433 06:56:37 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:53.433 06:56:37 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:53.433 06:56:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:53.433 No valid GPT data, bailing 00:03:53.433 06:56:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.433 06:56:37 -- scripts/common.sh@393 -- # pt= 00:03:53.433 06:56:37 -- scripts/common.sh@394 -- # return 1 00:03:53.433 06:56:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:53.433 06:56:37 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:53.433 06:56:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:53.433 06:56:37 -- setup/common.sh@80 -- # echo 4294967296 00:03:53.433 06:56:37 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.433 06:56:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.433 06:56:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:53.433 06:56:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.433 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:53.433 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.433 06:56:37 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:53.433 06:56:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:53.433 06:56:37 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:53.433 06:56:37 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:53.433 06:56:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:53.692 No valid GPT data, bailing 00:03:53.692 06:56:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:53.692 06:56:37 -- scripts/common.sh@393 -- # pt= 00:03:53.692 06:56:37 -- scripts/common.sh@394 -- # return 1 00:03:53.692 06:56:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:53.692 06:56:37 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:53.692 06:56:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:53.692 06:56:37 -- setup/common.sh@80 -- # echo 4294967296 00:03:53.692 06:56:37 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.692 06:56:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.692 06:56:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:53.692 06:56:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.692 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:53.692 06:56:37 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.692 06:56:37 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:53.692 06:56:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:53.692 06:56:37 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:53.692 06:56:37 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:53.692 06:56:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:53.692 No valid GPT data, bailing 00:03:53.692 06:56:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:53.692 06:56:37 -- scripts/common.sh@393 -- # pt= 00:03:53.692 06:56:37 -- scripts/common.sh@394 -- # return 1 00:03:53.692 06:56:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:53.692 06:56:37 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:53.692 06:56:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:53.692 06:56:37 -- setup/common.sh@80 -- # echo 4294967296 00:03:53.692 06:56:37 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.692 06:56:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.692 06:56:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:53.692 06:56:37 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:53.692 06:56:37 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.692 06:56:37 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.692 06:56:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.692 06:56:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.692 06:56:37 -- common/autotest_common.sh@10 -- # set +x 00:03:53.692 
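The device scan traced above screens every /sys/block/nvme* disk before the mount tests use it: a disk qualifies when it carries no partition-table signature (the blkid PTTYPE probe comes back empty, after spdk-gpt.py has already reported "No valid GPT data, bailing") and its capacity clears the 3 GiB min_disk_size floor. Below is a rough standalone sketch of that screening under those assumptions; the function name is illustrative and the spdk-gpt.py probe from the trace is omitted.

min_disk_size=$((3 * 1024 * 1024 * 1024))     # 3221225472 bytes, as in the trace

disk_is_usable() {
    local dev=$1 pt size_bytes
    # blkid prints the partition-table type (gpt, dos, ...) if the disk has one;
    # an empty result means nothing currently claims the disk.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
    [[ -n $pt ]] && return 1
    # /sys/block/<dev>/size counts 512-byte sectors.
    size_bytes=$(( $(cat "/sys/block/$dev/size") * 512 ))
    (( size_bytes >= min_disk_size ))
}

# e.g. disk_is_usable nvme0n1 && echo "nvme0n1 can host the nvme_mount test"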
************************************ 00:03:53.692 START TEST nvme_mount 00:03:53.692 ************************************ 00:03:53.692 06:56:37 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:53.692 06:56:37 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.692 06:56:37 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.692 06:56:37 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.692 06:56:37 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.692 06:56:37 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.692 06:56:37 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.692 06:56:37 -- setup/common.sh@40 -- # local part_no=1 00:03:53.692 06:56:37 -- setup/common.sh@41 -- # local size=1073741824 00:03:53.692 06:56:37 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.692 06:56:37 -- setup/common.sh@44 -- # parts=() 00:03:53.692 06:56:37 -- setup/common.sh@44 -- # local parts 00:03:53.692 06:56:37 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.692 06:56:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.692 06:56:37 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.692 06:56:37 -- setup/common.sh@46 -- # (( part++ )) 00:03:53.692 06:56:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.692 06:56:37 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:53.692 06:56:37 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.692 06:56:37 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:54.628 Creating new GPT entries in memory. 00:03:54.628 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:54.628 other utilities. 00:03:54.628 06:56:38 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:54.628 06:56:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.628 06:56:38 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.628 06:56:38 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.628 06:56:38 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:56.008 Creating new GPT entries in memory. 00:03:56.008 The operation has completed successfully. 
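The partitioning step just logged (sgdisk --zap-all followed by a locked sgdisk --new=1:2048:264191 and a "The operation has completed successfully." message) can be summarized by the sketch below. It is a compressed illustration, not the exact partition_drive helper: the real run waits for udev uevents through scripts/sync_dev_uevents.sh, whereas this sketch simply calls partprobe, and the helper name is made up.

make_test_partitions() {
    local disk=$1 part_no=${2:-1}
    local sectors=$((1073741824 / 4096))      # 262144 sectors per partition, as traced
    local part part_start=0 part_end=0
    sgdisk "/dev/$disk" --zap-all             # "GPT data structures destroyed!"
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + sectors - 1 ))
        # flock serializes table edits so parallel jobs cannot race on one disk.
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
    partprobe "/dev/$disk"                    # ask the kernel to re-read the table
}

# e.g. make_test_partitions nvme0n1 1 yields /dev/nvme0n1p1 spanning sectors 2048-264191,
# which the test then formats with mkfs.ext4 -qF and mounts under test/setup/nvme_mount.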
00:03:56.008 06:56:39 -- setup/common.sh@57 -- # (( part++ )) 00:03:56.008 06:56:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.008 06:56:39 -- setup/common.sh@62 -- # wait 53772 00:03:56.008 06:56:39 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.008 06:56:39 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:56.008 06:56:39 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.008 06:56:39 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:56.008 06:56:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:56.008 06:56:39 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.008 06:56:39 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.008 06:56:39 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:56.008 06:56:39 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:56.008 06:56:39 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.008 06:56:39 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.008 06:56:39 -- setup/devices.sh@53 -- # local found=0 00:03:56.008 06:56:39 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.008 06:56:39 -- setup/devices.sh@56 -- # : 00:03:56.008 06:56:39 -- setup/devices.sh@59 -- # local pci status 00:03:56.008 06:56:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.008 06:56:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:56.008 06:56:39 -- setup/devices.sh@47 -- # setup output config 00:03:56.008 06:56:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.008 06:56:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.008 06:56:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.008 06:56:39 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:56.008 06:56:39 -- setup/devices.sh@63 -- # found=1 00:03:56.008 06:56:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.008 06:56:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.008 06:56:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.266 06:56:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.266 06:56:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.525 06:56:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.525 06:56:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.525 06:56:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.525 06:56:40 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:56.525 06:56:40 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.525 06:56:40 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.525 06:56:40 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.525 06:56:40 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:56.525 06:56:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.525 06:56:40 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.525 06:56:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.525 06:56:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.525 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.525 06:56:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.525 06:56:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.785 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:56.785 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:56.785 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.785 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.785 06:56:40 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:56.785 06:56:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:56.785 06:56:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.785 06:56:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.785 06:56:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.785 06:56:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.785 06:56:40 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.785 06:56:40 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:56.785 06:56:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.785 06:56:40 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.785 06:56:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.785 06:56:40 -- setup/devices.sh@53 -- # local found=0 00:03:56.785 06:56:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.785 06:56:40 -- setup/devices.sh@56 -- # : 00:03:56.785 06:56:40 -- setup/devices.sh@59 -- # local pci status 00:03:56.785 06:56:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.785 06:56:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:56.785 06:56:40 -- setup/devices.sh@47 -- # setup output config 00:03:56.785 06:56:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.785 06:56:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.044 06:56:40 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.044 06:56:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:57.044 06:56:40 -- setup/devices.sh@63 -- # found=1 00:03:57.044 06:56:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.044 06:56:40 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.044 
06:56:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.302 06:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.302 06:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.302 06:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.302 06:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.561 06:56:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.561 06:56:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:57.561 06:56:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:57.561 06:56:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.561 06:56:41 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:57.561 06:56:41 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:57.561 06:56:41 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:57.561 06:56:41 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:57.561 06:56:41 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:57.561 06:56:41 -- setup/devices.sh@50 -- # local mount_point= 00:03:57.561 06:56:41 -- setup/devices.sh@51 -- # local test_file= 00:03:57.561 06:56:41 -- setup/devices.sh@53 -- # local found=0 00:03:57.561 06:56:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:57.561 06:56:41 -- setup/devices.sh@59 -- # local pci status 00:03:57.561 06:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.561 06:56:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:57.561 06:56:41 -- setup/devices.sh@47 -- # setup output config 00:03:57.561 06:56:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.561 06:56:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.820 06:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.820 06:56:41 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:57.820 06:56:41 -- setup/devices.sh@63 -- # found=1 00:03:57.820 06:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.820 06:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.820 06:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.079 06:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:58.079 06:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.079 06:56:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:58.079 06:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.079 06:56:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.079 06:56:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:58.079 06:56:42 -- setup/devices.sh@68 -- # return 0 00:03:58.079 06:56:42 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:58.079 06:56:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.079 06:56:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.079 06:56:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.079 06:56:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.079 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:03:58.079 00:03:58.079 real 0m4.486s 00:03:58.079 user 0m0.993s 00:03:58.079 sys 0m1.175s 00:03:58.079 06:56:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.079 06:56:42 -- common/autotest_common.sh@10 -- # set +x 00:03:58.079 ************************************ 00:03:58.079 END TEST nvme_mount 00:03:58.079 ************************************ 00:03:58.338 06:56:42 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:58.338 06:56:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.338 06:56:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.338 06:56:42 -- common/autotest_common.sh@10 -- # set +x 00:03:58.338 ************************************ 00:03:58.338 START TEST dm_mount 00:03:58.338 ************************************ 00:03:58.338 06:56:42 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:58.338 06:56:42 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:58.338 06:56:42 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:58.338 06:56:42 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:58.338 06:56:42 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:58.338 06:56:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:58.338 06:56:42 -- setup/common.sh@40 -- # local part_no=2 00:03:58.338 06:56:42 -- setup/common.sh@41 -- # local size=1073741824 00:03:58.338 06:56:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:58.338 06:56:42 -- setup/common.sh@44 -- # parts=() 00:03:58.338 06:56:42 -- setup/common.sh@44 -- # local parts 00:03:58.338 06:56:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:58.338 06:56:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.338 06:56:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.338 06:56:42 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.338 06:56:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.338 06:56:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.338 06:56:42 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.338 06:56:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.338 06:56:42 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:58.338 06:56:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:58.338 06:56:42 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:59.274 Creating new GPT entries in memory. 00:03:59.274 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:59.274 other utilities. 00:03:59.274 06:56:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:59.274 06:56:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.274 06:56:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.274 06:56:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.274 06:56:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:00.208 Creating new GPT entries in memory. 00:04:00.208 The operation has completed successfully. 00:04:00.208 06:56:44 -- setup/common.sh@57 -- # (( part++ )) 00:04:00.208 06:56:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.208 06:56:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:00.208 06:56:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.208 06:56:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:01.603 The operation has completed successfully. 00:04:01.603 06:56:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:01.603 06:56:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.603 06:56:45 -- setup/common.sh@62 -- # wait 54232 00:04:01.603 06:56:45 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:01.603 06:56:45 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.603 06:56:45 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:01.603 06:56:45 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:01.603 06:56:45 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:01.603 06:56:45 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:01.603 06:56:45 -- setup/devices.sh@161 -- # break 00:04:01.603 06:56:45 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:01.603 06:56:45 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:01.603 06:56:45 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:01.603 06:56:45 -- setup/devices.sh@166 -- # dm=dm-0 00:04:01.603 06:56:45 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:01.603 06:56:45 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:01.603 06:56:45 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.603 06:56:45 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:01.603 06:56:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.603 06:56:45 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:01.603 06:56:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:01.603 06:56:45 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.603 06:56:45 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:01.603 06:56:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:01.603 06:56:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:01.603 06:56:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.603 06:56:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:01.603 06:56:45 -- setup/devices.sh@53 -- # local found=0 00:04:01.603 06:56:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:01.603 06:56:45 -- setup/devices.sh@56 -- # : 00:04:01.603 06:56:45 -- setup/devices.sh@59 -- # local pci status 00:04:01.603 06:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.603 06:56:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:01.603 06:56:45 -- setup/devices.sh@47 -- # setup output config 00:04:01.603 06:56:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.603 06:56:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.603 06:56:45 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:01.603 06:56:45 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:01.603 06:56:45 -- setup/devices.sh@63 -- # found=1 00:04:01.603 06:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.604 06:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:01.604 06:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.862 06:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:01.862 06:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.121 06:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.121 06:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.121 06:56:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.121 06:56:45 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:02.121 06:56:45 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.121 06:56:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.121 06:56:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:02.121 06:56:46 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.121 06:56:46 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:02.121 06:56:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:02.121 06:56:46 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:02.121 06:56:46 -- setup/devices.sh@50 -- # local mount_point= 00:04:02.121 06:56:46 -- setup/devices.sh@51 -- # local test_file= 00:04:02.121 06:56:46 -- setup/devices.sh@53 -- # local found=0 00:04:02.121 06:56:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.121 06:56:46 -- setup/devices.sh@59 -- # local pci status 00:04:02.121 06:56:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.121 06:56:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:02.121 06:56:46 -- setup/devices.sh@47 -- # setup output config 00:04:02.121 06:56:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.121 06:56:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.379 06:56:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.379 06:56:46 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:02.379 06:56:46 -- setup/devices.sh@63 -- # found=1 00:04:02.379 06:56:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.379 06:56:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.379 06:56:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.637 06:56:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.637 06:56:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.637 06:56:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.637 06:56:46 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.637 06:56:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.637 06:56:46 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:02.637 06:56:46 -- setup/devices.sh@68 -- # return 0 00:04:02.637 06:56:46 -- setup/devices.sh@187 -- # cleanup_dm 00:04:02.637 06:56:46 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.637 06:56:46 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:02.637 06:56:46 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:02.637 06:56:46 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.637 06:56:46 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:02.896 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.896 06:56:46 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:02.896 06:56:46 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:02.896 00:04:02.896 real 0m4.531s 00:04:02.896 user 0m0.654s 00:04:02.896 sys 0m0.799s 00:04:02.896 06:56:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.896 06:56:46 -- common/autotest_common.sh@10 -- # set +x 00:04:02.896 ************************************ 00:04:02.896 END TEST dm_mount 00:04:02.896 ************************************ 00:04:02.896 06:56:46 -- setup/devices.sh@1 -- # cleanup 00:04:02.896 06:56:46 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:02.896 06:56:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.896 06:56:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.896 06:56:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:02.896 06:56:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.896 06:56:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.155 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:03.155 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:03.155 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:03.155 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:03.155 06:56:47 -- setup/devices.sh@12 -- # cleanup_dm 00:04:03.155 06:56:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:03.155 06:56:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:03.155 06:56:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.155 06:56:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:03.155 06:56:47 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.155 06:56:47 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:03.155 00:04:03.155 real 0m10.594s 00:04:03.155 user 0m2.300s 00:04:03.155 sys 0m2.590s 00:04:03.155 06:56:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.155 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:04:03.155 ************************************ 00:04:03.155 END TEST devices 00:04:03.155 ************************************ 00:04:03.155 00:04:03.155 real 0m22.096s 00:04:03.155 user 0m7.310s 00:04:03.155 sys 0m9.084s 00:04:03.155 ************************************ 00:04:03.155 END TEST setup.sh 00:04:03.155 ************************************ 00:04:03.155 06:56:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.155 06:56:47 -- common/autotest_common.sh@10 -- # set +x 00:04:03.155 06:56:47 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.413 Hugepages 00:04:03.413 node hugesize free / total 00:04:03.413 node0 1048576kB 0 / 0 00:04:03.413 node0 2048kB 2048 / 2048 00:04:03.413 00:04:03.413 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.413 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.413 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:03.672 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:03.672 06:56:47 -- spdk/autotest.sh@141 -- # uname -s 00:04:03.672 06:56:47 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:03.672 06:56:47 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:03.672 06:56:47 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.237 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.495 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.495 06:56:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:05.429 06:56:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:05.429 06:56:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:05.429 06:56:49 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.429 06:56:49 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:05.429 06:56:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.429 06:56:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.429 06:56:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.429 06:56:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.429 06:56:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.429 06:56:49 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:05.429 06:56:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:05.429 06:56:49 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.996 Waiting for block devices as requested 00:04:05.996 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.996 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.996 06:56:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:05.996 06:56:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:05.996 06:56:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.996 06:56:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:05.996 06:56:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:05.996 06:56:50 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:05.996 06:56:50 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:05.996 06:56:50 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:05.996 06:56:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:05.996 06:56:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:05.996 06:56:50 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:05.996 06:56:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:05.996 06:56:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:05.996 06:56:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:05.996 06:56:50 -- common/autotest_common.sh@1542 -- # continue 00:04:05.996 06:56:50 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:05.996 06:56:50 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:05.996 06:56:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.996 06:56:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:04:05.996 06:56:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:05.996 06:56:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:05.996 06:56:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:05.996 06:56:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:05.996 06:56:50 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:05.996 06:56:50 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:05.996 06:56:50 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:05.996 06:56:50 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:06.255 06:56:50 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:06.255 06:56:50 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:06.255 06:56:50 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:06.255 06:56:50 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:06.255 06:56:50 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:06.255 06:56:50 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:06.255 06:56:50 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:06.255 06:56:50 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:06.255 06:56:50 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:06.255 06:56:50 -- common/autotest_common.sh@1542 -- # continue 00:04:06.255 06:56:50 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:06.255 06:56:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:06.255 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:06.255 06:56:50 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:06.255 06:56:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:06.255 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:06.255 06:56:50 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.085 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.085 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:07.085 06:56:50 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:07.085 06:56:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:07.085 06:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:07.085 06:56:51 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:07.085 06:56:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:07.085 06:56:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.085 06:56:51 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:07.085 06:56:51 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:07.085 06:56:51 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:07.085 06:56:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:07.085 06:56:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:07.085 06:56:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.085 06:56:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:07.085 06:56:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:07.085 06:56:51 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:07.085 06:56:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:07.085 06:56:51 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:07.085 06:56:51 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:07.085 06:56:51 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:07.085 06:56:51 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.085 06:56:51 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:07.085 06:56:51 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:07.085 06:56:51 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:07.085 06:56:51 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.085 06:56:51 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:07.085 06:56:51 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:07.085 06:56:51 -- common/autotest_common.sh@1578 -- # return 0 00:04:07.085 06:56:51 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:07.085 06:56:51 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:07.085 06:56:51 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:07.085 06:56:51 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:07.085 06:56:51 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:07.085 06:56:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:07.085 06:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:07.085 06:56:51 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.085 06:56:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.085 06:56:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.085 06:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:07.085 ************************************ 00:04:07.085 START TEST env 00:04:07.085 ************************************ 00:04:07.085 06:56:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.344 * Looking for test storage... 
00:04:07.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:07.344 06:56:51 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.344 06:56:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.344 06:56:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.344 06:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:07.344 ************************************ 00:04:07.344 START TEST env_memory 00:04:07.344 ************************************ 00:04:07.344 06:56:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.344 00:04:07.344 00:04:07.344 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.344 http://cunit.sourceforge.net/ 00:04:07.344 00:04:07.344 00:04:07.344 Suite: memory 00:04:07.344 Test: alloc and free memory map ...[2024-07-11 06:56:51.273916] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.344 passed 00:04:07.344 Test: mem map translation ...[2024-07-11 06:56:51.304923] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.344 [2024-07-11 06:56:51.304958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.344 [2024-07-11 06:56:51.305014] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.344 [2024-07-11 06:56:51.305024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.344 passed 00:04:07.344 Test: mem map registration ...[2024-07-11 06:56:51.368985] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:07.344 [2024-07-11 06:56:51.369014] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:07.344 passed 00:04:07.603 Test: mem map adjacent registrations ...passed 00:04:07.603 00:04:07.603 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.603 suites 1 1 n/a 0 0 00:04:07.603 tests 4 4 4 0 0 00:04:07.603 asserts 152 152 152 0 n/a 00:04:07.603 00:04:07.603 Elapsed time = 0.214 seconds 00:04:07.603 00:04:07.603 real 0m0.233s 00:04:07.603 user 0m0.211s 00:04:07.603 sys 0m0.018s 00:04:07.603 06:56:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.603 06:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:07.603 ************************************ 00:04:07.603 END TEST env_memory 00:04:07.603 ************************************ 00:04:07.603 06:56:51 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.603 06:56:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.603 06:56:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.603 06:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:07.603 ************************************ 00:04:07.603 START TEST env_vtophys 00:04:07.603 ************************************ 00:04:07.603 06:56:51 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.603 EAL: lib.eal log level changed from notice to debug 00:04:07.603 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 1 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 2 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 3 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 4 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 5 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 6 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 7 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 8 as core 0 on socket 0 00:04:07.603 EAL: Detected lcore 9 as core 0 on socket 0 00:04:07.603 EAL: Maximum logical cores by configuration: 128 00:04:07.603 EAL: Detected CPU lcores: 10 00:04:07.603 EAL: Detected NUMA nodes: 1 00:04:07.603 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:07.603 EAL: Detected shared linkage of DPDK 00:04:07.603 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.603 EAL: Selected IOVA mode 'PA' 00:04:07.603 EAL: Probing VFIO support... 00:04:07.603 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.603 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:07.603 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.603 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.603 EAL: Setting up physically contiguous memory... 00:04:07.603 EAL: Setting maximum number of open files to 524288 00:04:07.603 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.603 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.603 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.603 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.603 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.603 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.603 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.603 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.603 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.603 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.603 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.603 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.603 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.603 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.603 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.603 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.603 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.603 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.603 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.603 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.603 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.603 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.603 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.603 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.603 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.603 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.603 EAL: Hugepages will be freed exactly as allocated. 
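[note] The four "VA reserved for memseg list" regions above follow directly from the parameters EAL prints for each list (n_segs:8192, hugepage_sz:2097152). A quick sanity check of that arithmetic, as a stand-alone bash line and not anything the vtophys test itself runs:

    # 8192 segments x 2 MiB hugepages = 0x400000000 bytes (16 GiB) per memseg list,
    # matching the size of each reserved virtual area reported by EAL above
    printf '0x%x\n' $(( 8192 * 2097152 ))    # prints 0x400000000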
00:04:07.603 EAL: No shared files mode enabled, IPC is disabled 00:04:07.603 EAL: No shared files mode enabled, IPC is disabled 00:04:07.603 EAL: TSC frequency is ~2200000 KHz 00:04:07.603 EAL: Main lcore 0 is ready (tid=7fecc6c40a00;cpuset=[0]) 00:04:07.603 EAL: Trying to obtain current memory policy. 00:04:07.603 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.603 EAL: Restoring previous memory policy: 0 00:04:07.603 EAL: request: mp_malloc_sync 00:04:07.603 EAL: No shared files mode enabled, IPC is disabled 00:04:07.603 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.603 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.603 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.603 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.603 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:07.862 00:04:07.862 00:04:07.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.862 http://cunit.sourceforge.net/ 00:04:07.862 00:04:07.862 00:04:07.862 Suite: components_suite 00:04:07.862 Test: vtophys_malloc_test ...passed 00:04:07.862 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.862 EAL: Restoring previous memory policy: 4 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.862 EAL: Trying to obtain current memory policy. 00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.862 EAL: Restoring previous memory policy: 4 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.862 EAL: Trying to obtain current memory policy. 00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.862 EAL: Restoring previous memory policy: 4 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.862 EAL: Trying to obtain current memory policy. 
00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.862 EAL: Restoring previous memory policy: 4 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.862 EAL: Trying to obtain current memory policy. 00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.862 EAL: Restoring previous memory policy: 4 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.862 EAL: Trying to obtain current memory policy. 00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.862 EAL: Restoring previous memory policy: 4 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.862 EAL: Trying to obtain current memory policy. 00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.862 EAL: Restoring previous memory policy: 4 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.862 EAL: request: mp_malloc_sync 00:04:07.862 EAL: No shared files mode enabled, IPC is disabled 00:04:07.862 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.862 EAL: Trying to obtain current memory policy. 00:04:07.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.121 EAL: Restoring previous memory policy: 4 00:04:08.121 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.121 EAL: request: mp_malloc_sync 00:04:08.121 EAL: No shared files mode enabled, IPC is disabled 00:04:08.121 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.121 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.121 EAL: request: mp_malloc_sync 00:04:08.121 EAL: No shared files mode enabled, IPC is disabled 00:04:08.121 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.121 EAL: Trying to obtain current memory policy. 
00:04:08.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.379 EAL: Restoring previous memory policy: 4 00:04:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.379 EAL: request: mp_malloc_sync 00:04:08.379 EAL: No shared files mode enabled, IPC is disabled 00:04:08.379 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.379 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.637 EAL: request: mp_malloc_sync 00:04:08.637 EAL: No shared files mode enabled, IPC is disabled 00:04:08.637 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.637 EAL: Trying to obtain current memory policy. 00:04:08.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.896 EAL: Restoring previous memory policy: 4 00:04:08.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.896 EAL: request: mp_malloc_sync 00:04:08.896 EAL: No shared files mode enabled, IPC is disabled 00:04:08.896 EAL: Heap on socket 0 was expanded by 1026MB 00:04:09.471 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.471 EAL: request: mp_malloc_sync 00:04:09.471 EAL: No shared files mode enabled, IPC is disabled 00:04:09.471 passed 00:04:09.471 00:04:09.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.471 suites 1 1 n/a 0 0 00:04:09.471 tests 2 2 2 0 0 00:04:09.471 asserts 5148 5148 5148 0 n/a 00:04:09.471 00:04:09.471 Elapsed time = 1.805 seconds 00:04:09.471 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.471 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.471 EAL: request: mp_malloc_sync 00:04:09.471 EAL: No shared files mode enabled, IPC is disabled 00:04:09.471 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.471 EAL: No shared files mode enabled, IPC is disabled 00:04:09.471 EAL: No shared files mode enabled, IPC is disabled 00:04:09.471 EAL: No shared files mode enabled, IPC is disabled 00:04:09.741 00:04:09.741 real 0m2.016s 00:04:09.741 user 0m1.169s 00:04:09.741 sys 0m0.709s 00:04:09.741 06:56:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.741 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.741 ************************************ 00:04:09.741 END TEST env_vtophys 00:04:09.741 ************************************ 00:04:09.741 06:56:53 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.741 06:56:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.741 06:56:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.741 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.741 ************************************ 00:04:09.741 START TEST env_pci 00:04:09.741 ************************************ 00:04:09.741 06:56:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.741 00:04:09.741 00:04:09.741 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.741 http://cunit.sourceforge.net/ 00:04:09.741 00:04:09.741 00:04:09.741 Suite: pci 00:04:09.741 Test: pci_hook ...[2024-07-11 06:56:53.592386] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55367 has claimed it 00:04:09.741 passed 00:04:09.741 00:04:09.741 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.741 suites 1 1 n/a 0 0 00:04:09.741 tests 1 1 1 0 0 00:04:09.741 asserts 25 25 25 0 n/a 00:04:09.741 00:04:09.741 Elapsed time = 0.002EAL: Cannot find device (10000:00:01.0) 00:04:09.741 EAL: Failed to attach device on primary process 
00:04:09.741 seconds 00:04:09.741 00:04:09.741 real 0m0.019s 00:04:09.741 user 0m0.010s 00:04:09.741 sys 0m0.009s 00:04:09.741 06:56:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.741 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.741 ************************************ 00:04:09.741 END TEST env_pci 00:04:09.741 ************************************ 00:04:09.741 06:56:53 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.741 06:56:53 -- env/env.sh@15 -- # uname 00:04:09.741 06:56:53 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.741 06:56:53 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.741 06:56:53 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.741 06:56:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:09.741 06:56:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.741 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.741 ************************************ 00:04:09.741 START TEST env_dpdk_post_init 00:04:09.741 ************************************ 00:04:09.741 06:56:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.741 EAL: Detected CPU lcores: 10 00:04:09.741 EAL: Detected NUMA nodes: 1 00:04:09.741 EAL: Detected shared linkage of DPDK 00:04:09.741 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.741 EAL: Selected IOVA mode 'PA' 00:04:09.741 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.999 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:09.999 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:09.999 Starting DPDK initialization... 00:04:09.999 Starting SPDK post initialization... 00:04:09.999 SPDK NVMe probe 00:04:09.999 Attaching to 0000:00:06.0 00:04:09.999 Attaching to 0000:00:07.0 00:04:09.999 Attached to 0000:00:06.0 00:04:09.999 Attached to 0000:00:07.0 00:04:09.999 Cleaning up... 
00:04:09.999 00:04:09.999 real 0m0.178s 00:04:09.999 user 0m0.048s 00:04:09.999 sys 0m0.031s 00:04:09.999 06:56:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.999 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.999 ************************************ 00:04:09.999 END TEST env_dpdk_post_init 00:04:09.999 ************************************ 00:04:09.999 06:56:53 -- env/env.sh@26 -- # uname 00:04:09.999 06:56:53 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:09.999 06:56:53 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:09.999 06:56:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.999 06:56:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.999 06:56:53 -- common/autotest_common.sh@10 -- # set +x 00:04:09.999 ************************************ 00:04:10.000 START TEST env_mem_callbacks 00:04:10.000 ************************************ 00:04:10.000 06:56:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.000 EAL: Detected CPU lcores: 10 00:04:10.000 EAL: Detected NUMA nodes: 1 00:04:10.000 EAL: Detected shared linkage of DPDK 00:04:10.000 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.000 EAL: Selected IOVA mode 'PA' 00:04:10.000 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.000 00:04:10.000 00:04:10.000 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.000 http://cunit.sourceforge.net/ 00:04:10.000 00:04:10.000 00:04:10.000 Suite: memory 00:04:10.000 Test: test ... 00:04:10.000 register 0x200000200000 2097152 00:04:10.000 malloc 3145728 00:04:10.000 register 0x200000400000 4194304 00:04:10.000 buf 0x200000500000 len 3145728 PASSED 00:04:10.000 malloc 64 00:04:10.000 buf 0x2000004fff40 len 64 PASSED 00:04:10.000 malloc 4194304 00:04:10.000 register 0x200000800000 6291456 00:04:10.000 buf 0x200000a00000 len 4194304 PASSED 00:04:10.000 free 0x200000500000 3145728 00:04:10.000 free 0x2000004fff40 64 00:04:10.000 unregister 0x200000400000 4194304 PASSED 00:04:10.000 free 0x200000a00000 4194304 00:04:10.000 unregister 0x200000800000 6291456 PASSED 00:04:10.000 malloc 8388608 00:04:10.000 register 0x200000400000 10485760 00:04:10.000 buf 0x200000600000 len 8388608 PASSED 00:04:10.000 free 0x200000600000 8388608 00:04:10.000 unregister 0x200000400000 10485760 PASSED 00:04:10.000 passed 00:04:10.000 00:04:10.000 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.000 suites 1 1 n/a 0 0 00:04:10.000 tests 1 1 1 0 0 00:04:10.000 asserts 15 15 15 0 n/a 00:04:10.000 00:04:10.000 Elapsed time = 0.010 seconds 00:04:10.000 00:04:10.000 real 0m0.146s 00:04:10.000 user 0m0.015s 00:04:10.000 sys 0m0.029s 00:04:10.000 06:56:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.000 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:04:10.000 ************************************ 00:04:10.000 END TEST env_mem_callbacks 00:04:10.000 ************************************ 00:04:10.257 00:04:10.257 real 0m2.947s 00:04:10.257 user 0m1.572s 00:04:10.257 sys 0m1.017s 00:04:10.257 06:56:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.257 ************************************ 00:04:10.257 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:04:10.257 END TEST env 00:04:10.257 ************************************ 00:04:10.257 06:56:54 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
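[note] The rpc suite launched by the run_test line above drives the spdk_tgt instance through the rpc_cmd helper. Outside of the harness, the same calls could be issued by hand with the repo's scripts/rpc.py against the default /var/tmp/spdk.sock socket; a minimal sketch for illustration only, not a command taken from this run:

    # create an 8 MB malloc bdev with 512-byte blocks (16384 blocks, as in the JSON dumps below),
    # then count registered bdevs; the suite itself does this via the rpc_cmd wrapper shown in the log
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length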
00:04:10.257 06:56:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.257 06:56:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.257 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:04:10.257 ************************************ 00:04:10.257 START TEST rpc 00:04:10.257 ************************************ 00:04:10.257 06:56:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.257 * Looking for test storage... 00:04:10.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.257 06:56:54 -- rpc/rpc.sh@65 -- # spdk_pid=55475 00:04:10.257 06:56:54 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.257 06:56:54 -- rpc/rpc.sh@67 -- # waitforlisten 55475 00:04:10.257 06:56:54 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:10.257 06:56:54 -- common/autotest_common.sh@819 -- # '[' -z 55475 ']' 00:04:10.257 06:56:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.257 06:56:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:10.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.257 06:56:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.257 06:56:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:10.257 06:56:54 -- common/autotest_common.sh@10 -- # set +x 00:04:10.258 [2024-07-11 06:56:54.272078] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:10.258 [2024-07-11 06:56:54.272745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55475 ] 00:04:10.515 [2024-07-11 06:56:54.413131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.515 [2024-07-11 06:56:54.557115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:10.515 [2024-07-11 06:56:54.557312] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:10.515 [2024-07-11 06:56:54.557327] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55475' to capture a snapshot of events at runtime. 00:04:10.515 [2024-07-11 06:56:54.557337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55475 for offline analysis/debug. 
00:04:10.515 [2024-07-11 06:56:54.557405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.448 06:56:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:11.448 06:56:55 -- common/autotest_common.sh@852 -- # return 0 00:04:11.448 06:56:55 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.448 06:56:55 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.448 06:56:55 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.448 06:56:55 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.448 06:56:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.448 06:56:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.448 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.448 ************************************ 00:04:11.448 START TEST rpc_integrity 00:04:11.448 ************************************ 00:04:11.448 06:56:55 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:11.448 06:56:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.448 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.448 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.448 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.449 06:56:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.449 06:56:55 -- rpc/rpc.sh@13 -- # jq length 00:04:11.449 06:56:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.449 06:56:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.449 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.449 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.449 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.449 06:56:55 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.449 06:56:55 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.449 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.449 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.449 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.449 06:56:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.449 { 00:04:11.449 "aliases": [ 00:04:11.449 "bf1b5c6f-835a-4177-90ca-f08182a7384a" 00:04:11.449 ], 00:04:11.449 "assigned_rate_limits": { 00:04:11.449 "r_mbytes_per_sec": 0, 00:04:11.449 "rw_ios_per_sec": 0, 00:04:11.449 "rw_mbytes_per_sec": 0, 00:04:11.449 "w_mbytes_per_sec": 0 00:04:11.449 }, 00:04:11.449 "block_size": 512, 00:04:11.449 "claimed": false, 00:04:11.449 "driver_specific": {}, 00:04:11.449 "memory_domains": [ 00:04:11.449 { 00:04:11.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.449 "dma_device_type": 2 00:04:11.449 } 00:04:11.449 ], 00:04:11.449 "name": "Malloc0", 00:04:11.449 "num_blocks": 16384, 00:04:11.449 "product_name": "Malloc disk", 00:04:11.449 "supported_io_types": { 00:04:11.449 "abort": true, 00:04:11.449 "compare": false, 00:04:11.449 "compare_and_write": false, 00:04:11.449 "flush": true, 00:04:11.449 "nvme_admin": false, 00:04:11.449 "nvme_io": false, 00:04:11.449 "read": true, 00:04:11.449 "reset": true, 00:04:11.449 "unmap": true, 00:04:11.449 "write": true, 00:04:11.449 "write_zeroes": true 00:04:11.449 }, 
00:04:11.449 "uuid": "bf1b5c6f-835a-4177-90ca-f08182a7384a", 00:04:11.449 "zoned": false 00:04:11.449 } 00:04:11.449 ]' 00:04:11.449 06:56:55 -- rpc/rpc.sh@17 -- # jq length 00:04:11.449 06:56:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.449 06:56:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.449 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.449 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.449 [2024-07-11 06:56:55.423471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.449 [2024-07-11 06:56:55.423551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.449 [2024-07-11 06:56:55.423569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1863300 00:04:11.449 [2024-07-11 06:56:55.423580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.449 [2024-07-11 06:56:55.425083] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.449 [2024-07-11 06:56:55.425115] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.449 Passthru0 00:04:11.449 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.449 06:56:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.449 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.449 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.449 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.449 06:56:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.449 { 00:04:11.449 "aliases": [ 00:04:11.449 "bf1b5c6f-835a-4177-90ca-f08182a7384a" 00:04:11.449 ], 00:04:11.449 "assigned_rate_limits": { 00:04:11.449 "r_mbytes_per_sec": 0, 00:04:11.449 "rw_ios_per_sec": 0, 00:04:11.449 "rw_mbytes_per_sec": 0, 00:04:11.449 "w_mbytes_per_sec": 0 00:04:11.449 }, 00:04:11.449 "block_size": 512, 00:04:11.449 "claim_type": "exclusive_write", 00:04:11.449 "claimed": true, 00:04:11.449 "driver_specific": {}, 00:04:11.449 "memory_domains": [ 00:04:11.449 { 00:04:11.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.449 "dma_device_type": 2 00:04:11.449 } 00:04:11.449 ], 00:04:11.449 "name": "Malloc0", 00:04:11.449 "num_blocks": 16384, 00:04:11.449 "product_name": "Malloc disk", 00:04:11.449 "supported_io_types": { 00:04:11.449 "abort": true, 00:04:11.449 "compare": false, 00:04:11.449 "compare_and_write": false, 00:04:11.449 "flush": true, 00:04:11.449 "nvme_admin": false, 00:04:11.449 "nvme_io": false, 00:04:11.449 "read": true, 00:04:11.449 "reset": true, 00:04:11.449 "unmap": true, 00:04:11.449 "write": true, 00:04:11.449 "write_zeroes": true 00:04:11.449 }, 00:04:11.449 "uuid": "bf1b5c6f-835a-4177-90ca-f08182a7384a", 00:04:11.449 "zoned": false 00:04:11.449 }, 00:04:11.449 { 00:04:11.449 "aliases": [ 00:04:11.449 "8195e34a-9bfc-55b6-96c2-ccc8096b14b8" 00:04:11.449 ], 00:04:11.449 "assigned_rate_limits": { 00:04:11.449 "r_mbytes_per_sec": 0, 00:04:11.449 "rw_ios_per_sec": 0, 00:04:11.449 "rw_mbytes_per_sec": 0, 00:04:11.449 "w_mbytes_per_sec": 0 00:04:11.449 }, 00:04:11.449 "block_size": 512, 00:04:11.449 "claimed": false, 00:04:11.449 "driver_specific": { 00:04:11.449 "passthru": { 00:04:11.449 "base_bdev_name": "Malloc0", 00:04:11.449 "name": "Passthru0" 00:04:11.449 } 00:04:11.449 }, 00:04:11.449 "memory_domains": [ 00:04:11.449 { 00:04:11.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.449 "dma_device_type": 2 00:04:11.449 } 00:04:11.449 ], 
00:04:11.449 "name": "Passthru0", 00:04:11.449 "num_blocks": 16384, 00:04:11.449 "product_name": "passthru", 00:04:11.449 "supported_io_types": { 00:04:11.449 "abort": true, 00:04:11.449 "compare": false, 00:04:11.449 "compare_and_write": false, 00:04:11.449 "flush": true, 00:04:11.449 "nvme_admin": false, 00:04:11.449 "nvme_io": false, 00:04:11.449 "read": true, 00:04:11.449 "reset": true, 00:04:11.449 "unmap": true, 00:04:11.449 "write": true, 00:04:11.449 "write_zeroes": true 00:04:11.449 }, 00:04:11.449 "uuid": "8195e34a-9bfc-55b6-96c2-ccc8096b14b8", 00:04:11.449 "zoned": false 00:04:11.449 } 00:04:11.449 ]' 00:04:11.449 06:56:55 -- rpc/rpc.sh@21 -- # jq length 00:04:11.707 06:56:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.707 06:56:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.707 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.707 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.707 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.707 06:56:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.707 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.707 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.707 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.707 06:56:55 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.708 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.708 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.708 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.708 06:56:55 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.708 06:56:55 -- rpc/rpc.sh@26 -- # jq length 00:04:11.708 06:56:55 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.708 00:04:11.708 real 0m0.340s 00:04:11.708 user 0m0.224s 00:04:11.708 sys 0m0.037s 00:04:11.708 06:56:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.708 ************************************ 00:04:11.708 END TEST rpc_integrity 00:04:11.708 ************************************ 00:04:11.708 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.708 06:56:55 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.708 06:56:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.708 06:56:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.708 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.708 ************************************ 00:04:11.708 START TEST rpc_plugins 00:04:11.708 ************************************ 00:04:11.708 06:56:55 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:11.708 06:56:55 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.708 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.708 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.708 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.708 06:56:55 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.708 06:56:55 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.708 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.708 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.708 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.708 06:56:55 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.708 { 00:04:11.708 "aliases": [ 00:04:11.708 "dec54303-2a57-41fd-813c-f77f5ab26ab2" 00:04:11.708 ], 00:04:11.708 "assigned_rate_limits": { 00:04:11.708 "r_mbytes_per_sec": 0, 00:04:11.708 
"rw_ios_per_sec": 0, 00:04:11.708 "rw_mbytes_per_sec": 0, 00:04:11.708 "w_mbytes_per_sec": 0 00:04:11.708 }, 00:04:11.708 "block_size": 4096, 00:04:11.708 "claimed": false, 00:04:11.708 "driver_specific": {}, 00:04:11.708 "memory_domains": [ 00:04:11.708 { 00:04:11.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.708 "dma_device_type": 2 00:04:11.708 } 00:04:11.708 ], 00:04:11.708 "name": "Malloc1", 00:04:11.708 "num_blocks": 256, 00:04:11.708 "product_name": "Malloc disk", 00:04:11.708 "supported_io_types": { 00:04:11.708 "abort": true, 00:04:11.708 "compare": false, 00:04:11.708 "compare_and_write": false, 00:04:11.708 "flush": true, 00:04:11.708 "nvme_admin": false, 00:04:11.708 "nvme_io": false, 00:04:11.708 "read": true, 00:04:11.708 "reset": true, 00:04:11.708 "unmap": true, 00:04:11.708 "write": true, 00:04:11.708 "write_zeroes": true 00:04:11.708 }, 00:04:11.708 "uuid": "dec54303-2a57-41fd-813c-f77f5ab26ab2", 00:04:11.708 "zoned": false 00:04:11.708 } 00:04:11.708 ]' 00:04:11.708 06:56:55 -- rpc/rpc.sh@32 -- # jq length 00:04:11.708 06:56:55 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.708 06:56:55 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.708 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.708 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.708 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.708 06:56:55 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.708 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.708 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.708 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.708 06:56:55 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.708 06:56:55 -- rpc/rpc.sh@36 -- # jq length 00:04:11.966 06:56:55 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.966 00:04:11.966 real 0m0.161s 00:04:11.966 user 0m0.108s 00:04:11.966 sys 0m0.021s 00:04:11.966 06:56:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.966 ************************************ 00:04:11.966 END TEST rpc_plugins 00:04:11.966 ************************************ 00:04:11.966 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.966 06:56:55 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.966 06:56:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.966 06:56:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.966 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.966 ************************************ 00:04:11.966 START TEST rpc_trace_cmd_test 00:04:11.966 ************************************ 00:04:11.966 06:56:55 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:11.966 06:56:55 -- rpc/rpc.sh@40 -- # local info 00:04:11.966 06:56:55 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.966 06:56:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.966 06:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.966 06:56:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.966 06:56:55 -- rpc/rpc.sh@42 -- # info='{ 00:04:11.966 "bdev": { 00:04:11.966 "mask": "0x8", 00:04:11.966 "tpoint_mask": "0xffffffffffffffff" 00:04:11.966 }, 00:04:11.966 "bdev_nvme": { 00:04:11.966 "mask": "0x4000", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "blobfs": { 00:04:11.966 "mask": "0x80", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "dsa": { 00:04:11.966 "mask": "0x200", 00:04:11.966 
"tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "ftl": { 00:04:11.966 "mask": "0x40", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "iaa": { 00:04:11.966 "mask": "0x1000", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "iscsi_conn": { 00:04:11.966 "mask": "0x2", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "nvme_pcie": { 00:04:11.966 "mask": "0x800", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "nvme_tcp": { 00:04:11.966 "mask": "0x2000", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.966 "nvmf_rdma": { 00:04:11.966 "mask": "0x10", 00:04:11.966 "tpoint_mask": "0x0" 00:04:11.966 }, 00:04:11.967 "nvmf_tcp": { 00:04:11.967 "mask": "0x20", 00:04:11.967 "tpoint_mask": "0x0" 00:04:11.967 }, 00:04:11.967 "scsi": { 00:04:11.967 "mask": "0x4", 00:04:11.967 "tpoint_mask": "0x0" 00:04:11.967 }, 00:04:11.967 "thread": { 00:04:11.967 "mask": "0x400", 00:04:11.967 "tpoint_mask": "0x0" 00:04:11.967 }, 00:04:11.967 "tpoint_group_mask": "0x8", 00:04:11.967 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55475" 00:04:11.967 }' 00:04:11.967 06:56:55 -- rpc/rpc.sh@43 -- # jq length 00:04:11.967 06:56:55 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:11.967 06:56:55 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.967 06:56:55 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.967 06:56:55 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:12.225 06:56:56 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:12.225 06:56:56 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:12.225 06:56:56 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:12.225 06:56:56 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:12.225 06:56:56 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:12.225 00:04:12.225 real 0m0.291s 00:04:12.225 user 0m0.250s 00:04:12.225 sys 0m0.031s 00:04:12.225 06:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.225 ************************************ 00:04:12.225 END TEST rpc_trace_cmd_test 00:04:12.225 ************************************ 00:04:12.225 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.225 06:56:56 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:12.225 06:56:56 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:12.225 06:56:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.225 06:56:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.225 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.225 ************************************ 00:04:12.225 START TEST go_rpc 00:04:12.225 ************************************ 00:04:12.225 06:56:56 -- common/autotest_common.sh@1104 -- # go_rpc 00:04:12.225 06:56:56 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:12.225 06:56:56 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:12.225 06:56:56 -- rpc/rpc.sh@52 -- # jq length 00:04:12.225 06:56:56 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:12.225 06:56:56 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.225 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.225 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.485 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.485 06:56:56 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:12.485 06:56:56 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:12.485 06:56:56 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["b86ad7e5-f433-48de-a2e0-e7cec879bfd3"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"b86ad7e5-f433-48de-a2e0-e7cec879bfd3","zoned":false}]' 00:04:12.485 06:56:56 -- rpc/rpc.sh@57 -- # jq length 00:04:12.485 06:56:56 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:12.485 06:56:56 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:12.485 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.485 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.485 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.485 06:56:56 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:12.485 06:56:56 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:12.485 06:56:56 -- rpc/rpc.sh@61 -- # jq length 00:04:12.485 06:56:56 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:12.485 00:04:12.485 real 0m0.228s 00:04:12.485 user 0m0.152s 00:04:12.485 sys 0m0.039s 00:04:12.485 ************************************ 00:04:12.485 END TEST go_rpc 00:04:12.485 06:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.485 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.485 ************************************ 00:04:12.485 06:56:56 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:12.485 06:56:56 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:12.485 06:56:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.485 06:56:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.485 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.485 ************************************ 00:04:12.485 START TEST rpc_daemon_integrity 00:04:12.485 ************************************ 00:04:12.485 06:56:56 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:12.485 06:56:56 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.485 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.485 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.485 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.485 06:56:56 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.485 06:56:56 -- rpc/rpc.sh@13 -- # jq length 00:04:12.744 06:56:56 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.744 06:56:56 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.744 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.744 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.744 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.744 06:56:56 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:12.744 06:56:56 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.744 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.744 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.744 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.744 06:56:56 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.744 { 00:04:12.744 "aliases": [ 00:04:12.744 "3a6b8f74-3915-43b2-865c-83aff231ef9b" 00:04:12.744 ], 00:04:12.744 "assigned_rate_limits": { 00:04:12.744 
"r_mbytes_per_sec": 0, 00:04:12.744 "rw_ios_per_sec": 0, 00:04:12.744 "rw_mbytes_per_sec": 0, 00:04:12.744 "w_mbytes_per_sec": 0 00:04:12.744 }, 00:04:12.744 "block_size": 512, 00:04:12.744 "claimed": false, 00:04:12.744 "driver_specific": {}, 00:04:12.744 "memory_domains": [ 00:04:12.744 { 00:04:12.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.744 "dma_device_type": 2 00:04:12.744 } 00:04:12.744 ], 00:04:12.744 "name": "Malloc3", 00:04:12.744 "num_blocks": 16384, 00:04:12.744 "product_name": "Malloc disk", 00:04:12.744 "supported_io_types": { 00:04:12.744 "abort": true, 00:04:12.744 "compare": false, 00:04:12.744 "compare_and_write": false, 00:04:12.744 "flush": true, 00:04:12.744 "nvme_admin": false, 00:04:12.744 "nvme_io": false, 00:04:12.744 "read": true, 00:04:12.744 "reset": true, 00:04:12.744 "unmap": true, 00:04:12.744 "write": true, 00:04:12.745 "write_zeroes": true 00:04:12.745 }, 00:04:12.745 "uuid": "3a6b8f74-3915-43b2-865c-83aff231ef9b", 00:04:12.745 "zoned": false 00:04:12.745 } 00:04:12.745 ]' 00:04:12.745 06:56:56 -- rpc/rpc.sh@17 -- # jq length 00:04:12.745 06:56:56 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.745 06:56:56 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:12.745 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.745 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 [2024-07-11 06:56:56.661759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:12.745 [2024-07-11 06:56:56.661823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.745 [2024-07-11 06:56:56.661864] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x194dbe0 00:04:12.745 [2024-07-11 06:56:56.661889] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.745 [2024-07-11 06:56:56.663305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.745 [2024-07-11 06:56:56.663337] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.745 Passthru0 00:04:12.745 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.745 06:56:56 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.745 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.745 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.745 06:56:56 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.745 { 00:04:12.745 "aliases": [ 00:04:12.745 "3a6b8f74-3915-43b2-865c-83aff231ef9b" 00:04:12.745 ], 00:04:12.745 "assigned_rate_limits": { 00:04:12.745 "r_mbytes_per_sec": 0, 00:04:12.745 "rw_ios_per_sec": 0, 00:04:12.745 "rw_mbytes_per_sec": 0, 00:04:12.745 "w_mbytes_per_sec": 0 00:04:12.745 }, 00:04:12.745 "block_size": 512, 00:04:12.745 "claim_type": "exclusive_write", 00:04:12.745 "claimed": true, 00:04:12.745 "driver_specific": {}, 00:04:12.745 "memory_domains": [ 00:04:12.745 { 00:04:12.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.745 "dma_device_type": 2 00:04:12.745 } 00:04:12.745 ], 00:04:12.745 "name": "Malloc3", 00:04:12.745 "num_blocks": 16384, 00:04:12.745 "product_name": "Malloc disk", 00:04:12.745 "supported_io_types": { 00:04:12.745 "abort": true, 00:04:12.745 "compare": false, 00:04:12.745 "compare_and_write": false, 00:04:12.745 "flush": true, 00:04:12.745 "nvme_admin": false, 00:04:12.745 "nvme_io": false, 00:04:12.745 "read": true, 00:04:12.745 "reset": true, 
00:04:12.745 "unmap": true, 00:04:12.745 "write": true, 00:04:12.745 "write_zeroes": true 00:04:12.745 }, 00:04:12.745 "uuid": "3a6b8f74-3915-43b2-865c-83aff231ef9b", 00:04:12.745 "zoned": false 00:04:12.745 }, 00:04:12.745 { 00:04:12.745 "aliases": [ 00:04:12.745 "67e038a2-3405-564f-8e02-441c8adf8500" 00:04:12.745 ], 00:04:12.745 "assigned_rate_limits": { 00:04:12.745 "r_mbytes_per_sec": 0, 00:04:12.745 "rw_ios_per_sec": 0, 00:04:12.745 "rw_mbytes_per_sec": 0, 00:04:12.745 "w_mbytes_per_sec": 0 00:04:12.745 }, 00:04:12.745 "block_size": 512, 00:04:12.745 "claimed": false, 00:04:12.745 "driver_specific": { 00:04:12.745 "passthru": { 00:04:12.745 "base_bdev_name": "Malloc3", 00:04:12.745 "name": "Passthru0" 00:04:12.745 } 00:04:12.745 }, 00:04:12.745 "memory_domains": [ 00:04:12.745 { 00:04:12.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.745 "dma_device_type": 2 00:04:12.745 } 00:04:12.745 ], 00:04:12.745 "name": "Passthru0", 00:04:12.745 "num_blocks": 16384, 00:04:12.745 "product_name": "passthru", 00:04:12.745 "supported_io_types": { 00:04:12.745 "abort": true, 00:04:12.745 "compare": false, 00:04:12.745 "compare_and_write": false, 00:04:12.745 "flush": true, 00:04:12.745 "nvme_admin": false, 00:04:12.745 "nvme_io": false, 00:04:12.745 "read": true, 00:04:12.745 "reset": true, 00:04:12.745 "unmap": true, 00:04:12.745 "write": true, 00:04:12.745 "write_zeroes": true 00:04:12.745 }, 00:04:12.745 "uuid": "67e038a2-3405-564f-8e02-441c8adf8500", 00:04:12.745 "zoned": false 00:04:12.745 } 00:04:12.745 ]' 00:04:12.745 06:56:56 -- rpc/rpc.sh@21 -- # jq length 00:04:12.745 06:56:56 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.745 06:56:56 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.745 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.745 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.745 06:56:56 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:12.745 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.745 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.745 06:56:56 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.745 06:56:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:12.745 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 06:56:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.745 06:56:56 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.745 06:56:56 -- rpc/rpc.sh@26 -- # jq length 00:04:13.004 06:56:56 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.004 00:04:13.004 real 0m0.326s 00:04:13.004 user 0m0.215s 00:04:13.004 sys 0m0.044s 00:04:13.004 ************************************ 00:04:13.004 06:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.004 06:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:13.004 END TEST rpc_daemon_integrity 00:04:13.004 ************************************ 00:04:13.004 06:56:56 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:13.004 06:56:56 -- rpc/rpc.sh@84 -- # killprocess 55475 00:04:13.004 06:56:56 -- common/autotest_common.sh@926 -- # '[' -z 55475 ']' 00:04:13.004 06:56:56 -- common/autotest_common.sh@930 -- # kill -0 55475 00:04:13.004 06:56:56 -- common/autotest_common.sh@931 -- # uname 00:04:13.004 06:56:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:13.004 06:56:56 -- common/autotest_common.sh@932 -- 
# ps --no-headers -o comm= 55475 00:04:13.004 06:56:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:13.004 06:56:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:13.004 killing process with pid 55475 00:04:13.004 06:56:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55475' 00:04:13.004 06:56:56 -- common/autotest_common.sh@945 -- # kill 55475 00:04:13.004 06:56:56 -- common/autotest_common.sh@950 -- # wait 55475 00:04:13.570 00:04:13.570 real 0m3.397s 00:04:13.570 user 0m4.300s 00:04:13.570 sys 0m0.905s 00:04:13.570 06:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.570 ************************************ 00:04:13.570 END TEST rpc 00:04:13.570 ************************************ 00:04:13.570 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:13.570 06:56:57 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.570 06:56:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.570 06:56:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.570 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:13.570 ************************************ 00:04:13.570 START TEST rpc_client 00:04:13.570 ************************************ 00:04:13.570 06:56:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.829 * Looking for test storage... 00:04:13.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:13.830 06:56:57 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:13.830 OK 00:04:13.830 06:56:57 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:13.830 00:04:13.830 real 0m0.102s 00:04:13.830 user 0m0.052s 00:04:13.830 sys 0m0.056s 00:04:13.830 06:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.830 ************************************ 00:04:13.830 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:13.830 END TEST rpc_client 00:04:13.830 ************************************ 00:04:13.830 06:56:57 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.830 06:56:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.830 06:56:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.830 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:13.830 ************************************ 00:04:13.830 START TEST json_config 00:04:13.830 ************************************ 00:04:13.830 06:56:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.830 06:56:57 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.830 06:56:57 -- nvmf/common.sh@7 -- # uname -s 00:04:13.830 06:56:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.830 06:56:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.830 06:56:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.830 06:56:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.830 06:56:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.830 06:56:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.830 06:56:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.830 06:56:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.830 06:56:57 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.830 06:56:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.830 06:56:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:04:13.830 06:56:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:04:13.830 06:56:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.830 06:56:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.830 06:56:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.830 06:56:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.830 06:56:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.830 06:56:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.830 06:56:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.830 06:56:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.830 06:56:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.830 06:56:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.830 06:56:57 -- paths/export.sh@5 -- # export PATH 00:04:13.830 06:56:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.830 06:56:57 -- nvmf/common.sh@46 -- # : 0 00:04:13.830 06:56:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:13.830 06:56:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:13.830 06:56:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:13.830 06:56:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.830 06:56:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.830 06:56:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:13.830 06:56:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:13.830 06:56:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:13.830 INFO: JSON configuration test init 00:04:13.830 06:56:57 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:13.830 06:56:57 
-- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:13.830 06:56:57 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:13.830 06:56:57 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:13.830 06:56:57 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:13.830 06:56:57 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:13.830 06:56:57 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:13.830 06:56:57 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:13.830 06:56:57 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:13.830 06:56:57 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:13.830 06:56:57 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:13.830 06:56:57 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:13.830 06:56:57 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:13.830 06:56:57 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.830 06:56:57 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:13.830 06:56:57 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:13.830 06:56:57 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:13.830 06:56:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:13.830 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:13.830 06:56:57 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:13.830 06:56:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:13.830 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:13.830 06:56:57 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:13.830 06:56:57 -- json_config/json_config.sh@98 -- # local app=target 00:04:13.830 06:56:57 -- json_config/json_config.sh@99 -- # shift 00:04:13.830 06:56:57 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:13.830 06:56:57 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:13.830 06:56:57 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:13.830 06:56:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:13.830 06:56:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:13.830 06:56:57 -- json_config/json_config.sh@111 -- # app_pid[$app]=55781 00:04:13.830 Waiting for target to run... 00:04:13.830 06:56:57 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:13.830 06:56:57 -- json_config/json_config.sh@114 -- # waitforlisten 55781 /var/tmp/spdk_tgt.sock 00:04:13.830 06:56:57 -- common/autotest_common.sh@819 -- # '[' -z 55781 ']' 00:04:13.830 06:56:57 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:13.830 06:56:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:13.830 06:56:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:13.830 06:56:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.830 06:56:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:13.830 06:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:14.089 [2024-07-11 06:56:57.894625] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:14.089 [2024-07-11 06:56:57.894730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55781 ] 00:04:14.348 [2024-07-11 06:56:58.363652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.607 [2024-07-11 06:56:58.487728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:14.607 [2024-07-11 06:56:58.487939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.866 06:56:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:14.866 00:04:14.866 06:56:58 -- common/autotest_common.sh@852 -- # return 0 00:04:14.866 06:56:58 -- json_config/json_config.sh@115 -- # echo '' 00:04:14.866 06:56:58 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:14.866 06:56:58 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:14.866 06:56:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:14.866 06:56:58 -- common/autotest_common.sh@10 -- # set +x 00:04:14.866 06:56:58 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:14.866 06:56:58 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:14.866 06:56:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:14.866 06:56:58 -- common/autotest_common.sh@10 -- # set +x 00:04:14.866 06:56:58 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:14.866 06:56:58 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:14.866 06:56:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.435 06:56:59 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:15.435 06:56:59 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:15.435 06:56:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.435 06:56:59 -- common/autotest_common.sh@10 -- # set +x 00:04:15.435 06:56:59 -- json_config/json_config.sh@48 -- # local ret=0 00:04:15.435 06:56:59 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.435 06:56:59 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:15.435 06:56:59 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:15.435 06:56:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.435 06:56:59 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:15.695 06:56:59 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:15.695 06:56:59 -- json_config/json_config.sh@51 -- # local get_types 00:04:15.695 06:56:59 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != 
\b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:15.695 06:56:59 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:15.695 06:56:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:15.695 06:56:59 -- common/autotest_common.sh@10 -- # set +x 00:04:15.695 06:56:59 -- json_config/json_config.sh@58 -- # return 0 00:04:15.695 06:56:59 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:15.695 06:56:59 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:15.695 06:56:59 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:15.695 06:56:59 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:15.695 06:56:59 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:15.695 06:56:59 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:15.695 06:56:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.695 06:56:59 -- common/autotest_common.sh@10 -- # set +x 00:04:15.695 06:56:59 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.695 06:56:59 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:15.695 06:56:59 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:15.695 06:56:59 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.695 06:56:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.953 MallocForNvmf0 00:04:15.953 06:57:00 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.953 06:57:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.211 MallocForNvmf1 00:04:16.211 06:57:00 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.211 06:57:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.470 [2024-07-11 06:57:00.463250] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.470 06:57:00 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.470 06:57:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.729 06:57:00 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.729 06:57:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.987 06:57:00 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.988 06:57:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.246 06:57:01 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.246 06:57:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.505 [2024-07-11 06:57:01.524318] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.505 06:57:01 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:17.505 06:57:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:17.505 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:04:17.765 06:57:01 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:17.765 06:57:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:17.765 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:04:17.765 06:57:01 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:17.765 06:57:01 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.765 06:57:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.024 MallocBdevForConfigChangeCheck 00:04:18.024 06:57:01 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:18.024 06:57:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:18.024 06:57:01 -- common/autotest_common.sh@10 -- # set +x 00:04:18.024 06:57:01 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:18.024 06:57:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.655 INFO: shutting down applications... 00:04:18.655 06:57:02 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
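Note: the MallocBdevForConfigChangeCheck bdev created just above is a marker the test uses so that later configuration changes are detectable. A minimal sketch of that pattern, assuming a running spdk_tgt listening on /var/tmp/spdk_tgt.sock and the in-tree scripts/rpc.py client at the paths shown in this log:

  # create a small marker bdev, then snapshot the live configuration
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  $RPC save_config > /tmp/baseline_config.json
  # any later save_config output can now be compared against this baseline;
  # the marker bdev is only present while the saved state still matches it
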
00:04:18.655 06:57:02 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:18.655 06:57:02 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:18.655 06:57:02 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:18.655 06:57:02 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:18.655 Calling clear_iscsi_subsystem 00:04:18.655 Calling clear_nvmf_subsystem 00:04:18.655 Calling clear_nbd_subsystem 00:04:18.655 Calling clear_ublk_subsystem 00:04:18.655 Calling clear_vhost_blk_subsystem 00:04:18.655 Calling clear_vhost_scsi_subsystem 00:04:18.655 Calling clear_scheduler_subsystem 00:04:18.655 Calling clear_bdev_subsystem 00:04:18.655 Calling clear_accel_subsystem 00:04:18.655 Calling clear_vmd_subsystem 00:04:18.655 Calling clear_sock_subsystem 00:04:18.655 Calling clear_iobuf_subsystem 00:04:18.913 06:57:02 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:18.913 06:57:02 -- json_config/json_config.sh@396 -- # count=100 00:04:18.913 06:57:02 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:18.914 06:57:02 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.914 06:57:02 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:18.914 06:57:02 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.171 06:57:03 -- json_config/json_config.sh@398 -- # break 00:04:19.171 06:57:03 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:19.171 06:57:03 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:19.171 06:57:03 -- json_config/json_config.sh@120 -- # local app=target 00:04:19.171 06:57:03 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:19.171 06:57:03 -- json_config/json_config.sh@124 -- # [[ -n 55781 ]] 00:04:19.171 06:57:03 -- json_config/json_config.sh@127 -- # kill -SIGINT 55781 00:04:19.171 06:57:03 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:19.171 06:57:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:19.171 06:57:03 -- json_config/json_config.sh@130 -- # kill -0 55781 00:04:19.171 06:57:03 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:19.737 06:57:03 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:19.737 06:57:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:19.737 06:57:03 -- json_config/json_config.sh@130 -- # kill -0 55781 00:04:19.737 06:57:03 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:19.737 06:57:03 -- json_config/json_config.sh@132 -- # break 00:04:19.737 06:57:03 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:19.737 SPDK target shutdown done 00:04:19.737 06:57:03 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:19.737 INFO: relaunching applications... 00:04:19.737 06:57:03 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
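Note: the shutdown traced above is bounded rather than open-ended — SIGINT is sent once and the test then polls for exit. A rough sketch of the same wait loop, where PID stands for the target pid reported earlier in this log and the 30 x 0.5 s budget mirrors what json_config.sh traces here:

  PID=55781                  # first spdk_tgt pid in this run
  kill -SIGINT "$PID"        # ask the target to shut down cleanly
  for _ in $(seq 1 30); do
      kill -0 "$PID" 2>/dev/null || break    # process gone -> stop waiting
      sleep 0.5
  done
  kill -0 "$PID" 2>/dev/null && echo "target did not exit in time" >&2
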
00:04:19.737 06:57:03 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.737 06:57:03 -- json_config/json_config.sh@98 -- # local app=target 00:04:19.737 06:57:03 -- json_config/json_config.sh@99 -- # shift 00:04:19.737 06:57:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:19.737 06:57:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:19.737 06:57:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:19.737 06:57:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.737 06:57:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.737 06:57:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=56061 00:04:19.737 Waiting for target to run... 00:04:19.737 06:57:03 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.737 06:57:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:19.737 06:57:03 -- json_config/json_config.sh@114 -- # waitforlisten 56061 /var/tmp/spdk_tgt.sock 00:04:19.737 06:57:03 -- common/autotest_common.sh@819 -- # '[' -z 56061 ']' 00:04:19.737 06:57:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.737 06:57:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:19.737 06:57:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.737 06:57:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:19.737 06:57:03 -- common/autotest_common.sh@10 -- # set +x 00:04:19.737 [2024-07-11 06:57:03.707125] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:19.737 [2024-07-11 06:57:03.707265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56061 ] 00:04:20.304 [2024-07-11 06:57:04.128161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.304 [2024-07-11 06:57:04.243269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:20.304 [2024-07-11 06:57:04.243478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.638 [2024-07-11 06:57:04.566867] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.638 [2024-07-11 06:57:04.598983] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.572 06:57:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:21.572 06:57:05 -- common/autotest_common.sh@852 -- # return 0 00:04:21.572 00:04:21.572 06:57:05 -- json_config/json_config.sh@115 -- # echo '' 00:04:21.572 06:57:05 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:21.572 INFO: Checking if target configuration is the same... 00:04:21.572 06:57:05 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
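Note: the comparison that follows normalizes both JSON documents before diffing, since save_config makes no ordering guarantees. A condensed sketch of the json_diff.sh flow seen below, using config_filter.py -method sort as the normalizer; the temp-file names are illustrative:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  CFG=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  live=$(mktemp); saved=$(mktemp)
  $RPC save_config | $CFG -method sort > "$live"
  $CFG -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
  if diff -u "$saved" "$live"; then echo 'INFO: JSON config files are the same'; fi
  rm -f "$live" "$saved"
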
00:04:21.572 06:57:05 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.572 06:57:05 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:21.572 06:57:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.572 + '[' 2 -ne 2 ']' 00:04:21.572 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.572 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:21.572 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.572 +++ basename /dev/fd/62 00:04:21.572 ++ mktemp /tmp/62.XXX 00:04:21.572 + tmp_file_1=/tmp/62.mDI 00:04:21.572 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.572 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.572 + tmp_file_2=/tmp/spdk_tgt_config.json.hqm 00:04:21.572 + ret=0 00:04:21.572 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.830 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.830 + diff -u /tmp/62.mDI /tmp/spdk_tgt_config.json.hqm 00:04:21.830 INFO: JSON config files are the same 00:04:21.830 + echo 'INFO: JSON config files are the same' 00:04:21.830 + rm /tmp/62.mDI /tmp/spdk_tgt_config.json.hqm 00:04:21.830 + exit 0 00:04:21.830 06:57:05 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:21.830 INFO: changing configuration and checking if this can be detected... 00:04:21.830 06:57:05 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:21.830 06:57:05 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.830 06:57:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.088 06:57:05 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.088 06:57:05 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:22.088 06:57:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.088 + '[' 2 -ne 2 ']' 00:04:22.088 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:22.088 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:22.088 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:22.088 +++ basename /dev/fd/62 00:04:22.088 ++ mktemp /tmp/62.XXX 00:04:22.088 + tmp_file_1=/tmp/62.F0h 00:04:22.088 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.088 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.088 + tmp_file_2=/tmp/spdk_tgt_config.json.XNB 00:04:22.088 + ret=0 00:04:22.088 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.346 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.604 + diff -u /tmp/62.F0h /tmp/spdk_tgt_config.json.XNB 00:04:22.604 + ret=1 00:04:22.604 + echo '=== Start of file: /tmp/62.F0h ===' 00:04:22.604 + cat /tmp/62.F0h 00:04:22.604 + echo '=== End of file: /tmp/62.F0h ===' 00:04:22.604 + echo '' 00:04:22.604 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XNB ===' 00:04:22.604 + cat /tmp/spdk_tgt_config.json.XNB 00:04:22.604 + echo '=== End of file: /tmp/spdk_tgt_config.json.XNB ===' 00:04:22.604 + echo '' 00:04:22.604 + rm /tmp/62.F0h /tmp/spdk_tgt_config.json.XNB 00:04:22.604 + exit 1 00:04:22.604 INFO: configuration change detected. 00:04:22.604 06:57:06 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:22.604 06:57:06 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:22.604 06:57:06 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:22.604 06:57:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:22.605 06:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:22.605 06:57:06 -- json_config/json_config.sh@360 -- # local ret=0 00:04:22.605 06:57:06 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:22.605 06:57:06 -- json_config/json_config.sh@370 -- # [[ -n 56061 ]] 00:04:22.605 06:57:06 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:22.605 06:57:06 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:22.605 06:57:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:22.605 06:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:22.605 06:57:06 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:22.605 06:57:06 -- json_config/json_config.sh@246 -- # uname -s 00:04:22.605 06:57:06 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:22.605 06:57:06 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:22.605 06:57:06 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:22.605 06:57:06 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:22.605 06:57:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:22.605 06:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:22.605 06:57:06 -- json_config/json_config.sh@376 -- # killprocess 56061 00:04:22.605 06:57:06 -- common/autotest_common.sh@926 -- # '[' -z 56061 ']' 00:04:22.605 06:57:06 -- common/autotest_common.sh@930 -- # kill -0 56061 00:04:22.605 06:57:06 -- common/autotest_common.sh@931 -- # uname 00:04:22.605 06:57:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:22.605 06:57:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56061 00:04:22.605 06:57:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:22.605 06:57:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:22.605 06:57:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56061' 00:04:22.605 killing process with pid 56061 00:04:22.605 
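Note: the killprocess helper traced around this point does not signal blindly; it first confirms the pid still exists and that its command name looks like an SPDK reactor. A trimmed sketch of that pattern (the reactor_0 / sudo check mirrors what autotest_common.sh logs here):

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0            # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0 for spdk_tgt
      [[ $name == sudo ]] && return 1                   # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                           # works because the target is our child
  }
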
06:57:06 -- common/autotest_common.sh@945 -- # kill 56061 00:04:22.605 06:57:06 -- common/autotest_common.sh@950 -- # wait 56061 00:04:23.172 06:57:06 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.172 06:57:06 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:23.172 06:57:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:23.172 06:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:23.172 06:57:06 -- json_config/json_config.sh@381 -- # return 0 00:04:23.172 INFO: Success 00:04:23.172 06:57:06 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:23.172 00:04:23.172 real 0m9.242s 00:04:23.172 user 0m13.028s 00:04:23.172 sys 0m2.072s 00:04:23.172 06:57:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.172 06:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:23.172 ************************************ 00:04:23.172 END TEST json_config 00:04:23.172 ************************************ 00:04:23.172 06:57:07 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.172 06:57:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.172 06:57:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.172 06:57:07 -- common/autotest_common.sh@10 -- # set +x 00:04:23.172 ************************************ 00:04:23.172 START TEST json_config_extra_key 00:04:23.172 ************************************ 00:04:23.172 06:57:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.172 06:57:07 -- nvmf/common.sh@7 -- # uname -s 00:04:23.172 06:57:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.172 06:57:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.172 06:57:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.172 06:57:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.172 06:57:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.172 06:57:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.172 06:57:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.172 06:57:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.172 06:57:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.172 06:57:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.172 06:57:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:04:23.172 06:57:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:04:23.172 06:57:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.172 06:57:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.172 06:57:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.172 06:57:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.172 06:57:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.172 06:57:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.172 06:57:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.172 06:57:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.172 06:57:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.172 06:57:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.172 06:57:07 -- paths/export.sh@5 -- # export PATH 00:04:23.172 06:57:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.172 06:57:07 -- nvmf/common.sh@46 -- # : 0 00:04:23.172 06:57:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:23.172 06:57:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:23.172 06:57:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:23.172 06:57:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.172 06:57:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.172 06:57:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:23.172 06:57:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:23.172 06:57:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:23.172 06:57:07 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.172 INFO: launching applications... 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
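Note: unlike the earlier json_config run, json_config_extra_key.sh starts the target directly from a static JSON file instead of --wait-for-rpc. A small sketch of that launch-and-wait pattern, assuming the repo layout shown in this log; the polling loop stands in for the waitforlisten helper:

  SOCK=/var/tmp/spdk_tgt.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  APP_PID=$!
  # give the target up to ~10 s to create its RPC socket before issuing rpc.py calls
  for _ in $(seq 1 100); do
      [[ -S $SOCK ]] && break
      sleep 0.1
  done
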
00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56244 00:04:23.173 Waiting for target to run... 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56244 /var/tmp/spdk_tgt.sock 00:04:23.173 06:57:07 -- common/autotest_common.sh@819 -- # '[' -z 56244 ']' 00:04:23.173 06:57:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.173 06:57:07 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.173 06:57:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.173 06:57:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.173 06:57:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:23.173 06:57:07 -- common/autotest_common.sh@10 -- # set +x 00:04:23.173 [2024-07-11 06:57:07.148025] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:23.173 [2024-07-11 06:57:07.148136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56244 ] 00:04:23.740 [2024-07-11 06:57:07.563299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.740 [2024-07-11 06:57:07.671682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:23.740 [2024-07-11 06:57:07.671867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.307 06:57:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:24.307 00:04:24.307 06:57:08 -- common/autotest_common.sh@852 -- # return 0 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:24.307 INFO: shutting down applications... 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56244 ]] 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56244 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56244 00:04:24.307 06:57:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:24.566 06:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:24.566 06:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:24.566 06:57:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56244 00:04:24.566 06:57:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56244 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:25.134 SPDK target shutdown done 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:25.134 Success 00:04:25.134 06:57:09 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:25.134 00:04:25.134 real 0m2.080s 00:04:25.134 user 0m1.646s 00:04:25.134 sys 0m0.432s 00:04:25.134 06:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.134 06:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.134 ************************************ 00:04:25.134 END TEST json_config_extra_key 00:04:25.134 ************************************ 00:04:25.134 06:57:09 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.134 06:57:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.134 06:57:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.134 06:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.134 ************************************ 00:04:25.134 START TEST alias_rpc 00:04:25.134 ************************************ 00:04:25.134 06:57:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:25.392 * Looking for test storage... 
00:04:25.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:25.392 06:57:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:25.392 06:57:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56326 00:04:25.392 06:57:09 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.392 06:57:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56326 00:04:25.392 06:57:09 -- common/autotest_common.sh@819 -- # '[' -z 56326 ']' 00:04:25.392 06:57:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.392 06:57:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:25.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.392 06:57:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.392 06:57:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:25.392 06:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:25.392 [2024-07-11 06:57:09.299962] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:25.392 [2024-07-11 06:57:09.300066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56326 ] 00:04:25.392 [2024-07-11 06:57:09.436600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.650 [2024-07-11 06:57:09.534563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:25.650 [2024-07-11 06:57:09.534723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.217 06:57:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:26.217 06:57:10 -- common/autotest_common.sh@852 -- # return 0 00:04:26.217 06:57:10 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:26.476 06:57:10 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56326 00:04:26.476 06:57:10 -- common/autotest_common.sh@926 -- # '[' -z 56326 ']' 00:04:26.476 06:57:10 -- common/autotest_common.sh@930 -- # kill -0 56326 00:04:26.476 06:57:10 -- common/autotest_common.sh@931 -- # uname 00:04:26.476 06:57:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:26.476 06:57:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56326 00:04:26.476 06:57:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:26.476 06:57:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:26.476 killing process with pid 56326 00:04:26.476 06:57:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56326' 00:04:26.476 06:57:10 -- common/autotest_common.sh@945 -- # kill 56326 00:04:26.476 06:57:10 -- common/autotest_common.sh@950 -- # wait 56326 00:04:27.043 00:04:27.043 real 0m1.882s 00:04:27.043 user 0m1.953s 00:04:27.043 sys 0m0.514s 00:04:27.043 06:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.043 06:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:27.043 ************************************ 00:04:27.043 END TEST alias_rpc 00:04:27.043 ************************************ 00:04:27.043 06:57:11 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:04:27.043 06:57:11 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility 
/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.043 06:57:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.043 06:57:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.043 06:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:27.043 ************************************ 00:04:27.043 START TEST dpdk_mem_utility 00:04:27.043 ************************************ 00:04:27.043 06:57:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.301 * Looking for test storage... 00:04:27.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:27.301 06:57:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:27.301 06:57:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56417 00:04:27.301 06:57:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.301 06:57:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56417 00:04:27.301 06:57:11 -- common/autotest_common.sh@819 -- # '[' -z 56417 ']' 00:04:27.301 06:57:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.301 06:57:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:27.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.301 06:57:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.301 06:57:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:27.301 06:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:27.301 [2024-07-11 06:57:11.247492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
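Both the alias_rpc test above and this dpdk_mem_utility test start a fresh spdk_tgt and then call waitforlisten, which blocks until the target answers on /var/tmp/spdk.sock (up to the max_retries=100 seen in the trace). A rough sketch of the idea, not the helper's exact body; any cheap RPC works as the liveness probe:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  tgt_pid=$!
  for ((i = 0; i < 100; i++)); do
      # spdk_get_version is a lightweight RPC; a successful call means the socket is accepting requests
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
      sleep 0.5
  done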
00:04:27.301 [2024-07-11 06:57:11.247591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56417 ] 00:04:27.558 [2024-07-11 06:57:11.383934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.558 [2024-07-11 06:57:11.505710] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:27.558 [2024-07-11 06:57:11.505933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.490 06:57:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:28.490 06:57:12 -- common/autotest_common.sh@852 -- # return 0 00:04:28.490 06:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.490 06:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.490 06:57:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:28.490 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:28.490 { 00:04:28.490 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.490 } 00:04:28.490 06:57:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:28.490 06:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:28.490 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:28.490 1 heaps totaling size 814.000000 MiB 00:04:28.490 size: 814.000000 MiB heap id: 0 00:04:28.490 end heaps---------- 00:04:28.490 8 mempools totaling size 598.116089 MiB 00:04:28.490 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.490 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.490 size: 84.521057 MiB name: bdev_io_56417 00:04:28.490 size: 51.011292 MiB name: evtpool_56417 00:04:28.490 size: 50.003479 MiB name: msgpool_56417 00:04:28.490 size: 21.763794 MiB name: PDU_Pool 00:04:28.490 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.490 size: 0.026123 MiB name: Session_Pool 00:04:28.490 end mempools------- 00:04:28.490 6 memzones totaling size 4.142822 MiB 00:04:28.490 size: 1.000366 MiB name: RG_ring_0_56417 00:04:28.490 size: 1.000366 MiB name: RG_ring_1_56417 00:04:28.490 size: 1.000366 MiB name: RG_ring_4_56417 00:04:28.490 size: 1.000366 MiB name: RG_ring_5_56417 00:04:28.490 size: 0.125366 MiB name: RG_ring_2_56417 00:04:28.490 size: 0.015991 MiB name: RG_ring_3_56417 00:04:28.490 end memzones------- 00:04:28.490 06:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.490 heap id: 0 total size: 814.000000 MiB number of busy elements: 211 number of free elements: 15 00:04:28.490 list of free elements. 
size: 12.488220 MiB 00:04:28.490 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:28.490 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:28.490 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:28.490 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:28.490 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:28.490 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:28.490 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:28.490 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:28.490 element at address: 0x200000200000 with size: 0.837219 MiB 00:04:28.490 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:04:28.490 element at address: 0x20000b200000 with size: 0.489807 MiB 00:04:28.490 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:28.490 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:28.490 element at address: 0x200027e00000 with size: 0.399048 MiB 00:04:28.490 element at address: 0x200003a00000 with size: 0.351501 MiB 00:04:28.490 list of standard malloc elements. size: 199.249207 MiB 00:04:28.490 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:28.490 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:28.490 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:28.490 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:28.490 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:28.490 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:28.490 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:28.490 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:28.490 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:28.490 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:28.490 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:28.490 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:28.490 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:28.490 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:28.490 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:04:28.491 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:28.491 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa949c0 
with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:28.491 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e66280 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e66340 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6cf40 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:28.491 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6e880 with size: 0.000183 MiB 
00:04:28.492 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:28.492 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:28.492 list of memzone associated elements. 
size: 602.262573 MiB 00:04:28.492 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:28.492 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.492 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:28.492 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.492 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:28.492 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56417_0 00:04:28.492 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:28.492 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56417_0 00:04:28.492 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:28.492 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56417_0 00:04:28.492 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:28.492 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.492 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:28.492 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.492 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:28.492 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56417 00:04:28.492 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:28.492 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56417 00:04:28.492 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:28.492 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56417 00:04:28.492 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:28.492 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.492 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:28.492 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.492 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:28.492 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.492 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:28.492 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.492 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:28.492 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56417 00:04:28.492 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:28.492 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56417 00:04:28.492 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:28.492 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56417 00:04:28.492 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:28.492 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56417 00:04:28.492 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:28.492 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56417 00:04:28.492 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:28.492 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.492 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:28.492 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:28.492 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:28.492 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.492 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:28.492 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56417 00:04:28.492 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:28.492 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.492 element at address: 0x200027e66400 with size: 0.023743 MiB 00:04:28.492 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.492 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:28.492 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56417 00:04:28.492 element at address: 0x200027e6c540 with size: 0.002441 MiB 00:04:28.492 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.492 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:28.492 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56417 00:04:28.492 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:28.492 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56417 00:04:28.492 element at address: 0x200027e6d000 with size: 0.000305 MiB 00:04:28.492 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.492 06:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.492 06:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56417 00:04:28.492 06:57:12 -- common/autotest_common.sh@926 -- # '[' -z 56417 ']' 00:04:28.492 06:57:12 -- common/autotest_common.sh@930 -- # kill -0 56417 00:04:28.492 06:57:12 -- common/autotest_common.sh@931 -- # uname 00:04:28.492 06:57:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:28.492 06:57:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56417 00:04:28.492 06:57:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:28.492 06:57:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:28.492 killing process with pid 56417 00:04:28.492 06:57:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56417' 00:04:28.492 06:57:12 -- common/autotest_common.sh@945 -- # kill 56417 00:04:28.492 06:57:12 -- common/autotest_common.sh@950 -- # wait 56417 00:04:29.056 ************************************ 00:04:29.056 END TEST dpdk_mem_utility 00:04:29.056 ************************************ 00:04:29.056 00:04:29.056 real 0m1.902s 00:04:29.056 user 0m2.038s 00:04:29.056 sys 0m0.495s 00:04:29.056 06:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.056 06:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:29.056 06:57:13 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.056 06:57:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.056 06:57:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.056 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:04:29.056 ************************************ 00:04:29.056 START TEST event 00:04:29.056 ************************************ 00:04:29.056 06:57:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.314 * Looking for test storage... 
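The dpdk_mem_utility test that just finished boils down to three commands against the live target; the long element listing above is the heap-0 dump produced by the last of them (commands as printed in the trace):

  # ask the running spdk_tgt to dump DPDK memory state (the RPC reports /tmp/spdk_mem_dump.txt as the dump file)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from that dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # print the per-element contents of heap 0
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0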
00:04:29.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:29.314 06:57:13 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:29.314 06:57:13 -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.314 06:57:13 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.314 06:57:13 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:29.314 06:57:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.314 06:57:13 -- common/autotest_common.sh@10 -- # set +x 00:04:29.314 ************************************ 00:04:29.314 START TEST event_perf 00:04:29.314 ************************************ 00:04:29.314 06:57:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.314 Running I/O for 1 seconds...[2024-07-11 06:57:13.158129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:29.314 [2024-07-11 06:57:13.158398] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56511 ] 00:04:29.314 [2024-07-11 06:57:13.294116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.571 [2024-07-11 06:57:13.392424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.571 Running I/O for 1 seconds...[2024-07-11 06:57:13.392578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.571 [2024-07-11 06:57:13.392719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.571 [2024-07-11 06:57:13.392720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.501 00:04:30.501 lcore 0: 118098 00:04:30.501 lcore 1: 118098 00:04:30.501 lcore 2: 118100 00:04:30.501 lcore 3: 118099 00:04:30.501 done. 00:04:30.501 00:04:30.501 real 0m1.386s 00:04:30.501 user 0m4.198s 00:04:30.501 sys 0m0.073s 00:04:30.501 06:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.501 06:57:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.501 ************************************ 00:04:30.501 END TEST event_perf 00:04:30.501 ************************************ 00:04:30.759 06:57:14 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.759 06:57:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:30.759 06:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.759 06:57:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.759 ************************************ 00:04:30.759 START TEST event_reactor 00:04:30.759 ************************************ 00:04:30.759 06:57:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.759 [2024-07-11 06:57:14.594259] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
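event_perf above is a standalone benchmark binary: with -m 0xF -t 1 it runs four reactors for one second, and the 'lcore N:' lines are the per-core event counts it prints before 'done.'. The invocation, exactly as event.sh ran it:

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1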
00:04:30.759 [2024-07-11 06:57:14.594349] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56544 ] 00:04:30.759 [2024-07-11 06:57:14.730347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.016 [2024-07-11 06:57:14.843076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.951 test_start 00:04:31.951 oneshot 00:04:31.951 tick 100 00:04:31.951 tick 100 00:04:31.951 tick 250 00:04:31.951 tick 100 00:04:31.951 tick 100 00:04:31.951 tick 100 00:04:31.951 tick 250 00:04:31.951 tick 500 00:04:31.951 tick 100 00:04:31.951 tick 100 00:04:31.951 tick 250 00:04:31.951 tick 100 00:04:31.951 tick 100 00:04:31.951 test_end 00:04:31.951 00:04:31.951 real 0m1.404s 00:04:31.951 user 0m1.238s 00:04:31.951 sys 0m0.060s 00:04:31.951 ************************************ 00:04:31.951 END TEST event_reactor 00:04:31.951 ************************************ 00:04:31.951 06:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.951 06:57:15 -- common/autotest_common.sh@10 -- # set +x 00:04:32.210 06:57:16 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.210 06:57:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:32.210 06:57:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.210 06:57:16 -- common/autotest_common.sh@10 -- # set +x 00:04:32.210 ************************************ 00:04:32.210 START TEST event_reactor_perf 00:04:32.210 ************************************ 00:04:32.210 06:57:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.210 [2024-07-11 06:57:16.050506] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
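The test_start/tick/test_end lines above are markers printed by the reactor test app itself during its one-second run (-t 1). Invocation as run by event.sh:

  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1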
00:04:32.210 [2024-07-11 06:57:16.050606] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56585 ] 00:04:32.210 [2024-07-11 06:57:16.188318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.468 [2024-07-11 06:57:16.304286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.433 test_start 00:04:33.433 test_end 00:04:33.433 Performance: 414263 events per second 00:04:33.433 00:04:33.433 real 0m1.417s 00:04:33.433 user 0m1.242s 00:04:33.433 sys 0m0.069s 00:04:33.433 06:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.433 ************************************ 00:04:33.433 06:57:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.433 END TEST event_reactor_perf 00:04:33.433 ************************************ 00:04:33.691 06:57:17 -- event/event.sh@49 -- # uname -s 00:04:33.691 06:57:17 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.691 06:57:17 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.691 06:57:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.691 06:57:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.691 06:57:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.691 ************************************ 00:04:33.691 START TEST event_scheduler 00:04:33.691 ************************************ 00:04:33.691 06:57:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.691 * Looking for test storage... 00:04:33.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:33.691 06:57:17 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.691 06:57:17 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56640 00:04:33.691 06:57:17 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.691 06:57:17 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.691 06:57:17 -- scheduler/scheduler.sh@37 -- # waitforlisten 56640 00:04:33.691 06:57:17 -- common/autotest_common.sh@819 -- # '[' -z 56640 ']' 00:04:33.691 06:57:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.691 06:57:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:33.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.691 06:57:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.691 06:57:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:33.691 06:57:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.691 [2024-07-11 06:57:17.641288] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
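The event_scheduler test starting here launches the scheduler app with -m 0xF (four cores), -p 0x2 (main lcore 2, matching --main-lcore=2 in the EAL parameters below) and --wait-for-rpc, so subsystem initialization is held back until the script has switched the scheduler over RPC. The launch line from scheduler.sh:

  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f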
00:04:33.691 [2024-07-11 06:57:17.641369] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56640 ] 00:04:33.949 [2024-07-11 06:57:17.786160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.949 [2024-07-11 06:57:17.879208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.949 [2024-07-11 06:57:17.879482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.950 [2024-07-11 06:57:17.879481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.950 [2024-07-11 06:57:17.879331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.885 06:57:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:34.885 06:57:18 -- common/autotest_common.sh@852 -- # return 0 00:04:34.885 06:57:18 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:34.885 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.885 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.885 POWER: Env isn't set yet! 00:04:34.885 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:34.885 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.885 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.885 POWER: Attempting to initialise PSTAT power management... 00:04:34.885 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.885 POWER: Cannot set governor of lcore 0 to performance 00:04:34.885 POWER: Attempting to initialise AMD PSTATE power management... 00:04:34.885 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.885 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.885 POWER: Attempting to initialise CPPC power management... 00:04:34.885 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.885 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.885 POWER: Attempting to initialise VM power management... 
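The POWER lines above are the dynamic scheduler probing for a usable CPU-frequency driver (ACPI cpufreq, Intel P-state, AMD P-state, CPPC, and a virtio 'poweragent' channel whose result follows below); in this VM none of them can open the sysfs governor file, so the dpdk governor is left uninitialized and only the load-balancing side of the dynamic scheduler gets exercised. On a bare-metal host the active driver and governor can be checked with, for example:

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor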
00:04:34.885 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:34.886 POWER: Unable to set Power Management Environment for lcore 0 00:04:34.886 [2024-07-11 06:57:18.641500] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:34.886 [2024-07-11 06:57:18.641515] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:34.886 [2024-07-11 06:57:18.641523] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:34.886 [2024-07-11 06:57:18.641536] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:34.886 [2024-07-11 06:57:18.641544] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:34.886 [2024-07-11 06:57:18.641551] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 [2024-07-11 06:57:18.731658] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:34.886 06:57:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.886 06:57:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 ************************************ 00:04:34.886 START TEST scheduler_create_thread 00:04:34.886 ************************************ 00:04:34.886 06:57:18 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 2 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 3 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 4 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 5 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 6 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 7 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 8 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 9 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 10 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.886 06:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:34.886 06:57:18 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:34.886 06:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.886 06:57:18 -- common/autotest_common.sh@10 -- # set +x 00:04:36.261 ************************************ 00:04:36.261 END TEST scheduler_create_thread 00:04:36.261 ************************************ 00:04:36.261 06:57:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.261 00:04:36.261 real 0m1.171s 00:04:36.261 user 0m0.017s 00:04:36.261 sys 0m0.006s 00:04:36.261 06:57:19 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.261 06:57:19 -- common/autotest_common.sh@10 -- # set +x 00:04:36.261 06:57:19 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:36.261 06:57:19 -- scheduler/scheduler.sh@46 -- # killprocess 56640 00:04:36.261 06:57:19 -- common/autotest_common.sh@926 -- # '[' -z 56640 ']' 00:04:36.261 06:57:19 -- common/autotest_common.sh@930 -- # kill -0 56640 00:04:36.261 06:57:19 -- common/autotest_common.sh@931 -- # uname 00:04:36.261 06:57:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:36.261 06:57:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56640 00:04:36.261 killing process with pid 56640 00:04:36.261 06:57:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:36.261 06:57:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:36.261 06:57:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56640' 00:04:36.261 06:57:19 -- common/autotest_common.sh@945 -- # kill 56640 00:04:36.261 06:57:19 -- common/autotest_common.sh@950 -- # wait 56640 00:04:36.520 [2024-07-11 06:57:20.394127] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:36.779 00:04:36.779 real 0m3.124s 00:04:36.779 user 0m5.716s 00:04:36.779 sys 0m0.361s 00:04:36.779 06:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.779 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:04:36.779 ************************************ 00:04:36.779 END TEST event_scheduler 00:04:36.779 ************************************ 00:04:36.779 06:57:20 -- event/event.sh@51 -- # modprobe -n nbd 00:04:36.779 06:57:20 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:36.779 06:57:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.779 06:57:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.779 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:04:36.779 ************************************ 00:04:36.779 START TEST app_repeat 00:04:36.779 ************************************ 00:04:36.779 06:57:20 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:36.779 06:57:20 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.779 06:57:20 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.779 06:57:20 -- event/event.sh@13 -- # local nbd_list 00:04:36.779 06:57:20 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.779 06:57:20 -- event/event.sh@14 -- # local bdev_list 00:04:36.779 06:57:20 -- event/event.sh@15 -- # local repeat_times=4 00:04:36.779 06:57:20 -- event/event.sh@17 -- # modprobe nbd 00:04:36.779 Process app_repeat pid: 56741 00:04:36.779 spdk_app_start Round 0 00:04:36.779 06:57:20 -- event/event.sh@19 -- # repeat_pid=56741 00:04:36.779 06:57:20 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.779 06:57:20 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:36.779 06:57:20 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56741' 00:04:36.779 06:57:20 -- event/event.sh@23 -- # for i in {0..2} 00:04:36.779 06:57:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:36.779 06:57:20 -- event/event.sh@25 -- # waitforlisten 56741 /var/tmp/spdk-nbd.sock 00:04:36.779 06:57:20 -- common/autotest_common.sh@819 -- # '[' -z 56741 ']' 00:04:36.779 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:04:36.779 06:57:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.779 06:57:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.779 06:57:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.779 06:57:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.779 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:04:36.779 [2024-07-11 06:57:20.720720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:36.779 [2024-07-11 06:57:20.720808] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56741 ] 00:04:37.039 [2024-07-11 06:57:20.859512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.039 [2024-07-11 06:57:20.983739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.039 [2024-07-11 06:57:20.983751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.608 06:57:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:37.608 06:57:21 -- common/autotest_common.sh@852 -- # return 0 00:04:37.608 06:57:21 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.867 Malloc0 00:04:38.125 06:57:21 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.384 Malloc1 00:04:38.385 06:57:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@12 -- # local i 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.385 06:57:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.643 /dev/nbd0 00:04:38.643 06:57:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:38.643 06:57:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:38.643 06:57:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:38.643 06:57:22 -- common/autotest_common.sh@857 -- # local i 00:04:38.643 06:57:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:38.643 06:57:22 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:38.643 06:57:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:38.643 06:57:22 -- common/autotest_common.sh@861 -- # break 00:04:38.643 06:57:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:38.643 06:57:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:38.643 06:57:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.643 1+0 records in 00:04:38.643 1+0 records out 00:04:38.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449503 s, 9.1 MB/s 00:04:38.643 06:57:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.643 06:57:22 -- common/autotest_common.sh@874 -- # size=4096 00:04:38.643 06:57:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.643 06:57:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:38.643 06:57:22 -- common/autotest_common.sh@877 -- # return 0 00:04:38.643 06:57:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.643 06:57:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.643 06:57:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:38.903 /dev/nbd1 00:04:38.903 06:57:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:38.903 06:57:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:38.903 06:57:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:38.903 06:57:22 -- common/autotest_common.sh@857 -- # local i 00:04:38.903 06:57:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:38.903 06:57:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:38.903 06:57:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:38.903 06:57:22 -- common/autotest_common.sh@861 -- # break 00:04:38.903 06:57:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:38.903 06:57:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:38.903 06:57:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.903 1+0 records in 00:04:38.903 1+0 records out 00:04:38.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318466 s, 12.9 MB/s 00:04:38.903 06:57:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.903 06:57:22 -- common/autotest_common.sh@874 -- # size=4096 00:04:38.903 06:57:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.903 06:57:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:38.903 06:57:22 -- common/autotest_common.sh@877 -- # return 0 00:04:38.903 06:57:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.903 06:57:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.903 06:57:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.903 06:57:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.903 06:57:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.162 06:57:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:39.162 { 00:04:39.162 "bdev_name": "Malloc0", 00:04:39.162 "nbd_device": "/dev/nbd0" 00:04:39.162 }, 00:04:39.162 { 00:04:39.162 "bdev_name": "Malloc1", 
00:04:39.162 "nbd_device": "/dev/nbd1" 00:04:39.162 } 00:04:39.162 ]' 00:04:39.162 06:57:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.162 { 00:04:39.162 "bdev_name": "Malloc0", 00:04:39.162 "nbd_device": "/dev/nbd0" 00:04:39.162 }, 00:04:39.163 { 00:04:39.163 "bdev_name": "Malloc1", 00:04:39.163 "nbd_device": "/dev/nbd1" 00:04:39.163 } 00:04:39.163 ]' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.163 /dev/nbd1' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.163 /dev/nbd1' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.163 256+0 records in 00:04:39.163 256+0 records out 00:04:39.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663863 s, 158 MB/s 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.163 256+0 records in 00:04:39.163 256+0 records out 00:04:39.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023059 s, 45.5 MB/s 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.163 256+0 records in 00:04:39.163 256+0 records out 00:04:39.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238957 s, 43.9 MB/s 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@51 -- # local i 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.163 06:57:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@41 -- # break 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.442 06:57:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@41 -- # break 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.708 06:57:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.967 06:57:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:39.967 06:57:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:39.967 06:57:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@65 -- # true 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.225 06:57:24 -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.225 06:57:24 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.493 06:57:24 -- event/event.sh@35 -- # sleep 3 00:04:40.753 [2024-07-11 06:57:24.658741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.753 [2024-07-11 06:57:24.728890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.753 [2024-07-11 
06:57:24.728897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.753 [2024-07-11 06:57:24.799936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.753 [2024-07-11 06:57:24.800009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.035 06:57:27 -- event/event.sh@23 -- # for i in {0..2} 00:04:44.035 spdk_app_start Round 1 00:04:44.035 06:57:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:44.035 06:57:27 -- event/event.sh@25 -- # waitforlisten 56741 /var/tmp/spdk-nbd.sock 00:04:44.035 06:57:27 -- common/autotest_common.sh@819 -- # '[' -z 56741 ']' 00:04:44.035 06:57:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.035 06:57:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.035 06:57:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.035 06:57:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.035 06:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:44.035 06:57:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.035 06:57:27 -- common/autotest_common.sh@852 -- # return 0 00:04:44.035 06:57:27 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.035 Malloc0 00:04:44.035 06:57:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.295 Malloc1 00:04:44.295 06:57:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@12 -- # local i 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.295 /dev/nbd0 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.295 06:57:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:44.295 06:57:28 -- common/autotest_common.sh@857 -- # local i 00:04:44.295 06:57:28 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:04:44.295 06:57:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:44.295 06:57:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:44.295 06:57:28 -- common/autotest_common.sh@861 -- # break 00:04:44.295 06:57:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:44.295 06:57:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:44.295 06:57:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.295 1+0 records in 00:04:44.295 1+0 records out 00:04:44.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386275 s, 10.6 MB/s 00:04:44.295 06:57:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.295 06:57:28 -- common/autotest_common.sh@874 -- # size=4096 00:04:44.295 06:57:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.295 06:57:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:44.295 06:57:28 -- common/autotest_common.sh@877 -- # return 0 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.295 06:57:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.555 /dev/nbd1 00:04:44.555 06:57:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.555 06:57:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.555 06:57:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:44.555 06:57:28 -- common/autotest_common.sh@857 -- # local i 00:04:44.555 06:57:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:44.555 06:57:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:44.555 06:57:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:44.555 06:57:28 -- common/autotest_common.sh@861 -- # break 00:04:44.555 06:57:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:44.555 06:57:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:44.555 06:57:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.555 1+0 records in 00:04:44.555 1+0 records out 00:04:44.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274444 s, 14.9 MB/s 00:04:44.555 06:57:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.555 06:57:28 -- common/autotest_common.sh@874 -- # size=4096 00:04:44.555 06:57:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.555 06:57:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:44.555 06:57:28 -- common/autotest_common.sh@877 -- # return 0 00:04:44.555 06:57:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.555 06:57:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.555 06:57:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.555 06:57:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.555 06:57:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.124 { 00:04:45.124 "bdev_name": "Malloc0", 00:04:45.124 "nbd_device": "/dev/nbd0" 00:04:45.124 }, 00:04:45.124 { 00:04:45.124 
"bdev_name": "Malloc1", 00:04:45.124 "nbd_device": "/dev/nbd1" 00:04:45.124 } 00:04:45.124 ]' 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.124 { 00:04:45.124 "bdev_name": "Malloc0", 00:04:45.124 "nbd_device": "/dev/nbd0" 00:04:45.124 }, 00:04:45.124 { 00:04:45.124 "bdev_name": "Malloc1", 00:04:45.124 "nbd_device": "/dev/nbd1" 00:04:45.124 } 00:04:45.124 ]' 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.124 /dev/nbd1' 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.124 /dev/nbd1' 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.124 256+0 records in 00:04:45.124 256+0 records out 00:04:45.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595434 s, 176 MB/s 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.124 256+0 records in 00:04:45.124 256+0 records out 00:04:45.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024655 s, 42.5 MB/s 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.124 06:57:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.124 256+0 records in 00:04:45.124 256+0 records out 00:04:45.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276119 s, 38.0 MB/s 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.124 06:57:29 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@51 -- # local i 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.124 06:57:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@41 -- # break 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.383 06:57:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@41 -- # break 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.641 06:57:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.898 06:57:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.898 06:57:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.898 06:57:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.898 06:57:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@65 -- # true 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.899 06:57:29 -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.899 06:57:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.157 06:57:30 -- event/event.sh@35 -- # sleep 3 00:04:46.416 [2024-07-11 06:57:30.375075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.416 [2024-07-11 06:57:30.474471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:04:46.416 [2024-07-11 06:57:30.474478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.675 [2024-07-11 06:57:30.555328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.675 [2024-07-11 06:57:30.555433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.207 06:57:33 -- event/event.sh@23 -- # for i in {0..2} 00:04:49.207 spdk_app_start Round 2 00:04:49.207 06:57:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:49.207 06:57:33 -- event/event.sh@25 -- # waitforlisten 56741 /var/tmp/spdk-nbd.sock 00:04:49.207 06:57:33 -- common/autotest_common.sh@819 -- # '[' -z 56741 ']' 00:04:49.207 06:57:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.207 06:57:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:49.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.207 06:57:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.207 06:57:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:49.207 06:57:33 -- common/autotest_common.sh@10 -- # set +x 00:04:49.466 06:57:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:49.466 06:57:33 -- common/autotest_common.sh@852 -- # return 0 00:04:49.466 06:57:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.749 Malloc0 00:04:49.749 06:57:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.749 Malloc1 00:04:49.749 06:57:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@12 -- # local i 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.749 06:57:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.012 /dev/nbd0 00:04:50.012 06:57:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.012 06:57:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.012 06:57:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:50.012 06:57:33 -- common/autotest_common.sh@857 -- # local i 00:04:50.012 06:57:33 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:50.012 06:57:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:50.012 06:57:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:50.012 06:57:33 -- common/autotest_common.sh@861 -- # break 00:04:50.012 06:57:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:50.012 06:57:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:50.012 06:57:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.012 1+0 records in 00:04:50.012 1+0 records out 00:04:50.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214894 s, 19.1 MB/s 00:04:50.012 06:57:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.012 06:57:33 -- common/autotest_common.sh@874 -- # size=4096 00:04:50.012 06:57:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.012 06:57:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:50.012 06:57:33 -- common/autotest_common.sh@877 -- # return 0 00:04:50.012 06:57:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.012 06:57:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.012 06:57:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.271 /dev/nbd1 00:04:50.271 06:57:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.272 06:57:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.272 06:57:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:50.272 06:57:34 -- common/autotest_common.sh@857 -- # local i 00:04:50.272 06:57:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:50.272 06:57:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:50.272 06:57:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:50.272 06:57:34 -- common/autotest_common.sh@861 -- # break 00:04:50.272 06:57:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:50.272 06:57:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:50.272 06:57:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.272 1+0 records in 00:04:50.272 1+0 records out 00:04:50.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312756 s, 13.1 MB/s 00:04:50.272 06:57:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.272 06:57:34 -- common/autotest_common.sh@874 -- # size=4096 00:04:50.272 06:57:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.272 06:57:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:50.272 06:57:34 -- common/autotest_common.sh@877 -- # return 0 00:04:50.272 06:57:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.272 06:57:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.272 06:57:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.272 06:57:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.272 06:57:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.530 06:57:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.530 { 00:04:50.530 "bdev_name": "Malloc0", 00:04:50.530 "nbd_device": "/dev/nbd0" 
00:04:50.530 }, 00:04:50.530 { 00:04:50.530 "bdev_name": "Malloc1", 00:04:50.530 "nbd_device": "/dev/nbd1" 00:04:50.530 } 00:04:50.530 ]' 00:04:50.530 06:57:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.530 { 00:04:50.530 "bdev_name": "Malloc0", 00:04:50.530 "nbd_device": "/dev/nbd0" 00:04:50.530 }, 00:04:50.530 { 00:04:50.530 "bdev_name": "Malloc1", 00:04:50.530 "nbd_device": "/dev/nbd1" 00:04:50.530 } 00:04:50.530 ]' 00:04:50.530 06:57:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.789 06:57:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.789 /dev/nbd1' 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.790 /dev/nbd1' 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.790 256+0 records in 00:04:50.790 256+0 records out 00:04:50.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689409 s, 152 MB/s 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.790 256+0 records in 00:04:50.790 256+0 records out 00:04:50.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216886 s, 48.3 MB/s 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.790 256+0 records in 00:04:50.790 256+0 records out 00:04:50.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275353 s, 38.1 MB/s 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@51 -- # local i 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.790 06:57:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@41 -- # break 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.048 06:57:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@41 -- # break 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.307 06:57:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@65 -- # true 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.565 06:57:35 -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.565 06:57:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.824 06:57:35 -- event/event.sh@35 -- # sleep 3 00:04:52.083 [2024-07-11 06:57:36.031161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.083 [2024-07-11 06:57:36.099677] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:04:52.083 [2024-07-11 06:57:36.099687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.342 [2024-07-11 06:57:36.171306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.342 [2024-07-11 06:57:36.171380] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.873 06:57:38 -- event/event.sh@38 -- # waitforlisten 56741 /var/tmp/spdk-nbd.sock 00:04:54.873 06:57:38 -- common/autotest_common.sh@819 -- # '[' -z 56741 ']' 00:04:54.873 06:57:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.873 06:57:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.873 06:57:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.873 06:57:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.873 06:57:38 -- common/autotest_common.sh@10 -- # set +x 00:04:55.131 06:57:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.131 06:57:39 -- common/autotest_common.sh@852 -- # return 0 00:04:55.131 06:57:39 -- event/event.sh@39 -- # killprocess 56741 00:04:55.131 06:57:39 -- common/autotest_common.sh@926 -- # '[' -z 56741 ']' 00:04:55.132 06:57:39 -- common/autotest_common.sh@930 -- # kill -0 56741 00:04:55.132 06:57:39 -- common/autotest_common.sh@931 -- # uname 00:04:55.132 06:57:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:55.132 06:57:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56741 00:04:55.132 killing process with pid 56741 00:04:55.132 06:57:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:55.132 06:57:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:55.132 06:57:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56741' 00:04:55.132 06:57:39 -- common/autotest_common.sh@945 -- # kill 56741 00:04:55.132 06:57:39 -- common/autotest_common.sh@950 -- # wait 56741 00:04:55.390 spdk_app_start is called in Round 0. 00:04:55.390 Shutdown signal received, stop current app iteration 00:04:55.390 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:55.390 spdk_app_start is called in Round 1. 00:04:55.390 Shutdown signal received, stop current app iteration 00:04:55.390 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:55.390 spdk_app_start is called in Round 2. 00:04:55.390 Shutdown signal received, stop current app iteration 00:04:55.390 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:55.390 spdk_app_start is called in Round 3. 
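[editor's note] The three app_repeat rounds above each run the same nbd round trip: export both Malloc bdevs as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device, read it back with cmp, then tear the devices down and confirm nbd_get_disks reports nothing. A condensed sketch of that cycle, using the same rpc.py calls and dd/cmp options as the log; the temp-file path, the retry delay in the wait loop, and the shell variable names are illustrative, not taken from the test:

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp=/tmp/nbdrandtest        # illustrative; the test uses spdk/test/event/nbdrandtest

    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    for name in nbd0 nbd1; do
        # waitfornbd: poll until the kernel lists the device in /proc/partitions
        for _ in $(seq 1 20); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1               # retry delay is illustrative
        done
    done

    dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB of test data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct     # write it through the nbd device
        cmp -b -n 1M "$tmp" "$nbd"                                # read back and compare
    done
    rm "$tmp"

    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1
    count=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ] || echo "unexpected: nbd devices still exported"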
00:04:55.390 Shutdown signal received, stop current app iteration 00:04:55.390 06:57:39 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:55.390 06:57:39 -- event/event.sh@42 -- # return 0 00:04:55.390 00:04:55.390 real 0m18.639s 00:04:55.390 user 0m40.962s 00:04:55.390 sys 0m3.201s 00:04:55.390 06:57:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.390 ************************************ 00:04:55.390 END TEST app_repeat 00:04:55.390 ************************************ 00:04:55.390 06:57:39 -- common/autotest_common.sh@10 -- # set +x 00:04:55.390 06:57:39 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:55.390 06:57:39 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:55.390 06:57:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.390 06:57:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.390 06:57:39 -- common/autotest_common.sh@10 -- # set +x 00:04:55.390 ************************************ 00:04:55.390 START TEST cpu_locks 00:04:55.390 ************************************ 00:04:55.390 06:57:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:55.648 * Looking for test storage... 00:04:55.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:55.648 06:57:39 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:55.648 06:57:39 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:55.648 06:57:39 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:55.648 06:57:39 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:55.648 06:57:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.648 06:57:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.648 06:57:39 -- common/autotest_common.sh@10 -- # set +x 00:04:55.648 ************************************ 00:04:55.648 START TEST default_locks 00:04:55.648 ************************************ 00:04:55.648 06:57:39 -- common/autotest_common.sh@1104 -- # default_locks 00:04:55.648 06:57:39 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57367 00:04:55.648 06:57:39 -- event/cpu_locks.sh@47 -- # waitforlisten 57367 00:04:55.648 06:57:39 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.648 06:57:39 -- common/autotest_common.sh@819 -- # '[' -z 57367 ']' 00:04:55.648 06:57:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.648 06:57:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.648 06:57:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.648 06:57:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.648 06:57:39 -- common/autotest_common.sh@10 -- # set +x 00:04:55.648 [2024-07-11 06:57:39.546471] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
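[editor's note] run_test is what produces the START TEST / END TEST banners and the real/user/sys timing shown for app_repeat above. SPDK's actual helper lives in autotest_common.sh and does more bookkeeping than this; a generic wrapper in the same spirit, written only to show where those lines come from, would be:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # bash's time keyword prints the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g. run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh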
00:04:55.648 [2024-07-11 06:57:39.546573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57367 ] 00:04:55.648 [2024-07-11 06:57:39.683315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.907 [2024-07-11 06:57:39.780553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.907 [2024-07-11 06:57:39.780725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.474 06:57:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:56.474 06:57:40 -- common/autotest_common.sh@852 -- # return 0 00:04:56.474 06:57:40 -- event/cpu_locks.sh@49 -- # locks_exist 57367 00:04:56.474 06:57:40 -- event/cpu_locks.sh@22 -- # lslocks -p 57367 00:04:56.474 06:57:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.733 06:57:40 -- event/cpu_locks.sh@50 -- # killprocess 57367 00:04:56.733 06:57:40 -- common/autotest_common.sh@926 -- # '[' -z 57367 ']' 00:04:56.733 06:57:40 -- common/autotest_common.sh@930 -- # kill -0 57367 00:04:56.733 06:57:40 -- common/autotest_common.sh@931 -- # uname 00:04:56.733 06:57:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:56.733 06:57:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57367 00:04:56.733 06:57:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:56.733 06:57:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:56.733 killing process with pid 57367 00:04:56.733 06:57:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57367' 00:04:56.733 06:57:40 -- common/autotest_common.sh@945 -- # kill 57367 00:04:56.733 06:57:40 -- common/autotest_common.sh@950 -- # wait 57367 00:04:57.300 06:57:41 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57367 00:04:57.300 06:57:41 -- common/autotest_common.sh@640 -- # local es=0 00:04:57.300 06:57:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57367 00:04:57.300 06:57:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:57.300 06:57:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:57.300 06:57:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:57.300 06:57:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:57.300 06:57:41 -- common/autotest_common.sh@643 -- # waitforlisten 57367 00:04:57.300 06:57:41 -- common/autotest_common.sh@819 -- # '[' -z 57367 ']' 00:04:57.300 06:57:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.300 06:57:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.300 06:57:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
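[editor's note] The core of the default_locks check above is one question: does the spdk_tgt process hold its CPU-core file locks? The test answers it with util-linux lslocks, grepping for the spdk_cpu_lock name seen in the log. Standalone, with the PID as a parameter (the usage line with PID 57367 just mirrors the run above):

    locks_exist() {
        local pid=$1
        # spdk_cpu_lock is the lock name the test greps for above
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    if locks_exist 57367; then echo "core locks held"; else echo "no core locks"; fi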
00:04:57.300 06:57:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.300 06:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57367) - No such process 00:04:57.300 ERROR: process (pid: 57367) is no longer running 00:04:57.300 06:57:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:57.300 06:57:41 -- common/autotest_common.sh@852 -- # return 1 00:04:57.300 06:57:41 -- common/autotest_common.sh@643 -- # es=1 00:04:57.300 06:57:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:57.300 06:57:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:57.300 06:57:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:57.300 06:57:41 -- event/cpu_locks.sh@54 -- # no_locks 00:04:57.300 06:57:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:57.300 06:57:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:57.300 06:57:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:57.300 00:04:57.300 real 0m1.867s 00:04:57.300 user 0m1.861s 00:04:57.300 sys 0m0.573s 00:04:57.300 06:57:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.300 ************************************ 00:04:57.300 END TEST default_locks 00:04:57.300 06:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.300 ************************************ 00:04:57.559 06:57:41 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:57.559 06:57:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.559 06:57:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.559 06:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.559 ************************************ 00:04:57.559 START TEST default_locks_via_rpc 00:04:57.559 ************************************ 00:04:57.559 06:57:41 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:57.559 06:57:41 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57431 00:04:57.559 06:57:41 -- event/cpu_locks.sh@63 -- # waitforlisten 57431 00:04:57.559 06:57:41 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.559 06:57:41 -- common/autotest_common.sh@819 -- # '[' -z 57431 ']' 00:04:57.559 06:57:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.559 06:57:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.559 06:57:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.559 06:57:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.559 06:57:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.559 [2024-07-11 06:57:41.462403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
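[editor's note] The block just above is a negative test: once the target has been killed, calling waitforlisten on its PID is expected to fail, and the surrounding NOT/es plumbing only lets the test pass when the captured exit status is non-zero. Reduced to its skeleton, with check_listen as a hypothetical stand-in for that waitforlisten call:

    es=0
    check_listen 57367 /var/tmp/spdk.sock || es=$?   # hypothetical stand-in; expected to fail here
    if (( es != 0 )); then
        echo "failed as expected (es=$es)"
    else
        echo "unexpected success"; exit 1
    fi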
00:04:57.559 [2024-07-11 06:57:41.462525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57431 ] 00:04:57.559 [2024-07-11 06:57:41.598469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.817 [2024-07-11 06:57:41.690366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.817 [2024-07-11 06:57:41.690603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.384 06:57:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.384 06:57:42 -- common/autotest_common.sh@852 -- # return 0 00:04:58.384 06:57:42 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:58.384 06:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:58.384 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.384 06:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:58.384 06:57:42 -- event/cpu_locks.sh@67 -- # no_locks 00:04:58.384 06:57:42 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:58.384 06:57:42 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:58.384 06:57:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:58.384 06:57:42 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:58.384 06:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:58.384 06:57:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.384 06:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:58.384 06:57:42 -- event/cpu_locks.sh@71 -- # locks_exist 57431 00:04:58.384 06:57:42 -- event/cpu_locks.sh@22 -- # lslocks -p 57431 00:04:58.384 06:57:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.951 06:57:42 -- event/cpu_locks.sh@73 -- # killprocess 57431 00:04:58.951 06:57:42 -- common/autotest_common.sh@926 -- # '[' -z 57431 ']' 00:04:58.951 06:57:42 -- common/autotest_common.sh@930 -- # kill -0 57431 00:04:58.951 06:57:42 -- common/autotest_common.sh@931 -- # uname 00:04:58.951 06:57:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.951 06:57:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57431 00:04:58.951 06:57:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:58.951 06:57:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:58.951 killing process with pid 57431 00:04:58.951 06:57:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57431' 00:04:58.951 06:57:42 -- common/autotest_common.sh@945 -- # kill 57431 00:04:58.951 06:57:42 -- common/autotest_common.sh@950 -- # wait 57431 00:04:59.518 00:04:59.518 real 0m1.923s 00:04:59.518 user 0m1.970s 00:04:59.518 sys 0m0.568s 00:04:59.518 06:57:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.518 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:59.518 ************************************ 00:04:59.518 END TEST default_locks_via_rpc 00:04:59.518 ************************************ 00:04:59.518 06:57:43 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:59.518 06:57:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.518 06:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.518 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:59.518 
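[editor's note] default_locks_via_rpc exercises the same core locks, but toggles them at runtime over the RPC socket instead of with a start-up flag. With the RPC method names exactly as they appear above, tgt_pid standing in for the PID the test tracks, and an lslocks check substituting for the test's no_locks/locks_exist helpers, the sequence is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks                                # drop the per-core file locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held"
    $rpc framework_enable_cpumask_locks                                 # take them again
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "unexpected: locks missing"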
************************************ 00:04:59.518 START TEST non_locking_app_on_locked_coremask 00:04:59.518 ************************************ 00:04:59.518 06:57:43 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:04:59.518 06:57:43 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57500 00:04:59.518 06:57:43 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.518 06:57:43 -- event/cpu_locks.sh@81 -- # waitforlisten 57500 /var/tmp/spdk.sock 00:04:59.518 06:57:43 -- common/autotest_common.sh@819 -- # '[' -z 57500 ']' 00:04:59.518 06:57:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.518 06:57:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.518 06:57:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.518 06:57:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.518 06:57:43 -- common/autotest_common.sh@10 -- # set +x 00:04:59.518 [2024-07-11 06:57:43.425019] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:59.518 [2024-07-11 06:57:43.425108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57500 ] 00:04:59.518 [2024-07-11 06:57:43.555271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.776 [2024-07-11 06:57:43.664947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.776 [2024-07-11 06:57:43.665101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.340 06:57:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.340 06:57:44 -- common/autotest_common.sh@852 -- # return 0 00:05:00.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.340 06:57:44 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57528 00:05:00.340 06:57:44 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:00.340 06:57:44 -- event/cpu_locks.sh@85 -- # waitforlisten 57528 /var/tmp/spdk2.sock 00:05:00.340 06:57:44 -- common/autotest_common.sh@819 -- # '[' -z 57528 ']' 00:05:00.340 06:57:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.340 06:57:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.340 06:57:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.340 06:57:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.340 06:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:00.597 [2024-07-11 06:57:44.413558] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:00.597 [2024-07-11 06:57:44.414366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57528 ] 00:05:00.597 [2024-07-11 06:57:44.553567] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
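[editor's note] non_locking_app_on_locked_coremask runs two targets on the same core mask at once; that only works because the second instance is launched with --disable-cpumask-locks, so it does not contend for the core-0 lock the first one holds, and with -r so it listens on its own RPC socket. Stripped of the surrounding bookkeeping (the variable names and backgrounding with & are illustrative; the harness tracks PIDs through waitforlisten), the two launches above are:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $spdk_tgt -m 0x1 &                                                   # first instance, takes the core-0 lock
    pid1=$!
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # second instance, same mask, no locks
    pid2=$!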
00:05:00.597 [2024-07-11 06:57:44.553609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.855 [2024-07-11 06:57:44.762475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.855 [2024-07-11 06:57:44.762657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.245 06:57:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.245 06:57:46 -- common/autotest_common.sh@852 -- # return 0 00:05:02.245 06:57:46 -- event/cpu_locks.sh@87 -- # locks_exist 57500 00:05:02.245 06:57:46 -- event/cpu_locks.sh@22 -- # lslocks -p 57500 00:05:02.245 06:57:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.812 06:57:46 -- event/cpu_locks.sh@89 -- # killprocess 57500 00:05:02.812 06:57:46 -- common/autotest_common.sh@926 -- # '[' -z 57500 ']' 00:05:02.812 06:57:46 -- common/autotest_common.sh@930 -- # kill -0 57500 00:05:02.812 06:57:46 -- common/autotest_common.sh@931 -- # uname 00:05:02.812 06:57:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.812 06:57:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57500 00:05:03.070 killing process with pid 57500 00:05:03.070 06:57:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:03.070 06:57:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:03.070 06:57:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57500' 00:05:03.070 06:57:46 -- common/autotest_common.sh@945 -- # kill 57500 00:05:03.070 06:57:46 -- common/autotest_common.sh@950 -- # wait 57500 00:05:04.006 06:57:48 -- event/cpu_locks.sh@90 -- # killprocess 57528 00:05:04.006 06:57:48 -- common/autotest_common.sh@926 -- # '[' -z 57528 ']' 00:05:04.006 06:57:48 -- common/autotest_common.sh@930 -- # kill -0 57528 00:05:04.006 06:57:48 -- common/autotest_common.sh@931 -- # uname 00:05:04.006 06:57:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:04.006 06:57:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57528 00:05:04.006 killing process with pid 57528 00:05:04.006 06:57:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:04.006 06:57:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:04.006 06:57:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57528' 00:05:04.006 06:57:48 -- common/autotest_common.sh@945 -- # kill 57528 00:05:04.006 06:57:48 -- common/autotest_common.sh@950 -- # wait 57528 00:05:04.574 ************************************ 00:05:04.574 END TEST non_locking_app_on_locked_coremask 00:05:04.574 ************************************ 00:05:04.574 00:05:04.574 real 0m5.224s 00:05:04.574 user 0m5.603s 00:05:04.574 sys 0m1.312s 00:05:04.574 06:57:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.574 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:04.833 06:57:48 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:04.833 06:57:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.833 06:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.833 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:04.833 ************************************ 00:05:04.833 START TEST locking_app_on_unlocked_coremask 00:05:04.833 ************************************ 00:05:04.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
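[editor's note] Each of these tests tears its targets down with the same killprocess steps visible above: confirm the PID still exists, look at what it is with ps, send the kill, and wait for it to be reaped. A trimmed-down equivalent (the echo text mirrors the log; SPDK's helper also special-cases sudo-owned processes, which is omitted here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 for an SPDK target
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # wait only reaps our own children
    }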
00:05:04.833 06:57:48 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:04.833 06:57:48 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=57626 00:05:04.833 06:57:48 -- event/cpu_locks.sh@99 -- # waitforlisten 57626 /var/tmp/spdk.sock 00:05:04.833 06:57:48 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:04.833 06:57:48 -- common/autotest_common.sh@819 -- # '[' -z 57626 ']' 00:05:04.833 06:57:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.833 06:57:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:04.833 06:57:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.833 06:57:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:04.833 06:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:04.833 [2024-07-11 06:57:48.739514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:04.833 [2024-07-11 06:57:48.740682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57626 ] 00:05:04.833 [2024-07-11 06:57:48.879328] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:04.833 [2024-07-11 06:57:48.879612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.092 [2024-07-11 06:57:48.979315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.092 [2024-07-11 06:57:48.979494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.660 06:57:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.660 06:57:49 -- common/autotest_common.sh@852 -- # return 0 00:05:05.660 06:57:49 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57654 00:05:05.660 06:57:49 -- event/cpu_locks.sh@103 -- # waitforlisten 57654 /var/tmp/spdk2.sock 00:05:05.660 06:57:49 -- common/autotest_common.sh@819 -- # '[' -z 57654 ']' 00:05:05.660 06:57:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.660 06:57:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.660 06:57:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.660 06:57:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.660 06:57:49 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:05.660 06:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:05.919 [2024-07-11 06:57:49.729883] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:05.919 [2024-07-11 06:57:49.729967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57654 ] 00:05:05.919 [2024-07-11 06:57:49.865233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.177 [2024-07-11 06:57:50.113062] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.177 [2024-07-11 06:57:50.113218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.554 06:57:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:07.554 06:57:51 -- common/autotest_common.sh@852 -- # return 0 00:05:07.554 06:57:51 -- event/cpu_locks.sh@105 -- # locks_exist 57654 00:05:07.554 06:57:51 -- event/cpu_locks.sh@22 -- # lslocks -p 57654 00:05:07.554 06:57:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.122 06:57:51 -- event/cpu_locks.sh@107 -- # killprocess 57626 00:05:08.122 06:57:51 -- common/autotest_common.sh@926 -- # '[' -z 57626 ']' 00:05:08.122 06:57:51 -- common/autotest_common.sh@930 -- # kill -0 57626 00:05:08.122 06:57:51 -- common/autotest_common.sh@931 -- # uname 00:05:08.122 06:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:08.122 06:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57626 00:05:08.122 killing process with pid 57626 00:05:08.122 06:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:08.122 06:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:08.122 06:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57626' 00:05:08.122 06:57:51 -- common/autotest_common.sh@945 -- # kill 57626 00:05:08.122 06:57:51 -- common/autotest_common.sh@950 -- # wait 57626 00:05:09.058 06:57:53 -- event/cpu_locks.sh@108 -- # killprocess 57654 00:05:09.058 06:57:53 -- common/autotest_common.sh@926 -- # '[' -z 57654 ']' 00:05:09.058 06:57:53 -- common/autotest_common.sh@930 -- # kill -0 57654 00:05:09.058 06:57:53 -- common/autotest_common.sh@931 -- # uname 00:05:09.058 06:57:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:09.058 06:57:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57654 00:05:09.316 06:57:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:09.316 killing process with pid 57654 00:05:09.316 06:57:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:09.316 06:57:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57654' 00:05:09.316 06:57:53 -- common/autotest_common.sh@945 -- # kill 57654 00:05:09.316 06:57:53 -- common/autotest_common.sh@950 -- # wait 57654 00:05:09.882 ************************************ 00:05:09.882 END TEST locking_app_on_unlocked_coremask 00:05:09.882 ************************************ 00:05:09.882 00:05:09.882 real 0m5.045s 00:05:09.882 user 0m5.445s 00:05:09.882 sys 0m1.197s 00:05:09.882 06:57:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.882 06:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:09.882 06:57:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:09.882 06:57:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.882 06:57:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.882 06:57:53 -- common/autotest_common.sh@10 -- # set +x 
00:05:09.882 ************************************ 00:05:09.882 START TEST locking_app_on_locked_coremask 00:05:09.882 ************************************ 00:05:09.882 06:57:53 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:09.882 06:57:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57752 00:05:09.882 06:57:53 -- event/cpu_locks.sh@116 -- # waitforlisten 57752 /var/tmp/spdk.sock 00:05:09.882 06:57:53 -- common/autotest_common.sh@819 -- # '[' -z 57752 ']' 00:05:09.882 06:57:53 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.882 06:57:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.882 06:57:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.882 06:57:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.882 06:57:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.882 06:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:09.882 [2024-07-11 06:57:53.811787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:09.882 [2024-07-11 06:57:53.811901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57752 ] 00:05:10.140 [2024-07-11 06:57:53.948478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.140 [2024-07-11 06:57:54.063622] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.140 [2024-07-11 06:57:54.063792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.723 06:57:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.723 06:57:54 -- common/autotest_common.sh@852 -- # return 0 00:05:10.723 06:57:54 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.723 06:57:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57781 00:05:10.723 06:57:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57781 /var/tmp/spdk2.sock 00:05:10.723 06:57:54 -- common/autotest_common.sh@640 -- # local es=0 00:05:10.723 06:57:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57781 /var/tmp/spdk2.sock 00:05:10.723 06:57:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:10.723 06:57:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:10.723 06:57:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:10.723 06:57:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:10.723 06:57:54 -- common/autotest_common.sh@643 -- # waitforlisten 57781 /var/tmp/spdk2.sock 00:05:10.723 06:57:54 -- common/autotest_common.sh@819 -- # '[' -z 57781 ']' 00:05:10.723 06:57:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.723 06:57:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.723 06:57:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:10.723 06:57:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.723 06:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:10.724 [2024-07-11 06:57:54.768406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:10.724 [2024-07-11 06:57:54.768680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57781 ] 00:05:10.981 [2024-07-11 06:57:54.904260] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57752 has claimed it. 00:05:10.981 [2024-07-11 06:57:54.904348] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.546 ERROR: process (pid: 57781) is no longer running 00:05:11.546 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57781) - No such process 00:05:11.546 06:57:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.546 06:57:55 -- common/autotest_common.sh@852 -- # return 1 00:05:11.546 06:57:55 -- common/autotest_common.sh@643 -- # es=1 00:05:11.546 06:57:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:11.546 06:57:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:11.546 06:57:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:11.546 06:57:55 -- event/cpu_locks.sh@122 -- # locks_exist 57752 00:05:11.547 06:57:55 -- event/cpu_locks.sh@22 -- # lslocks -p 57752 00:05:11.547 06:57:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.112 06:57:55 -- event/cpu_locks.sh@124 -- # killprocess 57752 00:05:12.112 06:57:55 -- common/autotest_common.sh@926 -- # '[' -z 57752 ']' 00:05:12.112 06:57:55 -- common/autotest_common.sh@930 -- # kill -0 57752 00:05:12.112 06:57:55 -- common/autotest_common.sh@931 -- # uname 00:05:12.112 06:57:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:12.112 06:57:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57752 00:05:12.112 killing process with pid 57752 00:05:12.112 06:57:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:12.112 06:57:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:12.112 06:57:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57752' 00:05:12.112 06:57:55 -- common/autotest_common.sh@945 -- # kill 57752 00:05:12.112 06:57:55 -- common/autotest_common.sh@950 -- # wait 57752 00:05:12.680 ************************************ 00:05:12.680 END TEST locking_app_on_locked_coremask 00:05:12.680 ************************************ 00:05:12.680 00:05:12.680 real 0m2.817s 00:05:12.680 user 0m3.084s 00:05:12.680 sys 0m0.726s 00:05:12.680 06:57:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.680 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:12.680 06:57:56 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:12.680 06:57:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.680 06:57:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.680 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:12.680 ************************************ 00:05:12.680 START TEST locking_overlapped_coremask 00:05:12.680 ************************************ 00:05:12.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.680 06:57:56 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:12.680 06:57:56 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57837 00:05:12.680 06:57:56 -- event/cpu_locks.sh@133 -- # waitforlisten 57837 /var/tmp/spdk.sock 00:05:12.680 06:57:56 -- common/autotest_common.sh@819 -- # '[' -z 57837 ']' 00:05:12.680 06:57:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.680 06:57:56 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:12.680 06:57:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.680 06:57:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.680 06:57:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.680 06:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:12.680 [2024-07-11 06:57:56.687540] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:12.680 [2024-07-11 06:57:56.687636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57837 ] 00:05:12.939 [2024-07-11 06:57:56.826390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.939 [2024-07-11 06:57:56.965274] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.939 [2024-07-11 06:57:56.965886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.939 [2024-07-11 06:57:56.966137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.939 [2024-07-11 06:57:56.966145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.876 06:57:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.876 06:57:57 -- common/autotest_common.sh@852 -- # return 0 00:05:13.876 06:57:57 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57867 00:05:13.876 06:57:57 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:13.876 06:57:57 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57867 /var/tmp/spdk2.sock 00:05:13.876 06:57:57 -- common/autotest_common.sh@640 -- # local es=0 00:05:13.876 06:57:57 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57867 /var/tmp/spdk2.sock 00:05:13.876 06:57:57 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:13.876 06:57:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:13.876 06:57:57 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:13.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.876 06:57:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:13.876 06:57:57 -- common/autotest_common.sh@643 -- # waitforlisten 57867 /var/tmp/spdk2.sock 00:05:13.876 06:57:57 -- common/autotest_common.sh@819 -- # '[' -z 57867 ']' 00:05:13.876 06:57:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.876 06:57:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.876 06:57:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:13.876 06:57:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.876 06:57:57 -- common/autotest_common.sh@10 -- # set +x 00:05:13.876 [2024-07-11 06:57:57.722007] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:13.876 [2024-07-11 06:57:57.722137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57867 ] 00:05:13.876 [2024-07-11 06:57:57.865655] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57837 has claimed it. 00:05:13.876 [2024-07-11 06:57:57.868534] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:14.473 ERROR: process (pid: 57867) is no longer running 00:05:14.473 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57867) - No such process 00:05:14.473 06:57:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.473 06:57:58 -- common/autotest_common.sh@852 -- # return 1 00:05:14.473 06:57:58 -- common/autotest_common.sh@643 -- # es=1 00:05:14.473 06:57:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:14.473 06:57:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:14.473 06:57:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:14.473 06:57:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:14.473 06:57:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.473 06:57:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.473 06:57:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.473 06:57:58 -- event/cpu_locks.sh@141 -- # killprocess 57837 00:05:14.473 06:57:58 -- common/autotest_common.sh@926 -- # '[' -z 57837 ']' 00:05:14.473 06:57:58 -- common/autotest_common.sh@930 -- # kill -0 57837 00:05:14.473 06:57:58 -- common/autotest_common.sh@931 -- # uname 00:05:14.473 06:57:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:14.473 06:57:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57837 00:05:14.473 killing process with pid 57837 00:05:14.473 06:57:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:14.473 06:57:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:14.473 06:57:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57837' 00:05:14.473 06:57:58 -- common/autotest_common.sh@945 -- # kill 57837 00:05:14.473 06:57:58 -- common/autotest_common.sh@950 -- # wait 57837 00:05:15.039 ************************************ 00:05:15.039 END TEST locking_overlapped_coremask 00:05:15.039 ************************************ 00:05:15.039 00:05:15.039 real 0m2.402s 00:05:15.039 user 0m6.319s 00:05:15.039 sys 0m0.527s 00:05:15.039 06:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.039 06:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:15.039 06:57:59 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:15.039 06:57:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.039 06:57:59 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.039 06:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:15.039 ************************************ 00:05:15.039 START TEST locking_overlapped_coremask_via_rpc 00:05:15.039 ************************************ 00:05:15.039 06:57:59 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:15.039 06:57:59 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57914 00:05:15.039 06:57:59 -- event/cpu_locks.sh@149 -- # waitforlisten 57914 /var/tmp/spdk.sock 00:05:15.039 06:57:59 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:15.039 06:57:59 -- common/autotest_common.sh@819 -- # '[' -z 57914 ']' 00:05:15.039 06:57:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.039 06:57:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.039 06:57:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.039 06:57:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.039 06:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:15.297 [2024-07-11 06:57:59.151834] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:15.297 [2024-07-11 06:57:59.151968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57914 ] 00:05:15.297 [2024-07-11 06:57:59.289754] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:15.297 [2024-07-11 06:57:59.289798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.556 [2024-07-11 06:57:59.406357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.556 [2024-07-11 06:57:59.406650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.556 [2024-07-11 06:57:59.407084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.556 [2024-07-11 06:57:59.407126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.123 06:58:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.123 06:58:00 -- common/autotest_common.sh@852 -- # return 0 00:05:16.123 06:58:00 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=57944 00:05:16.123 06:58:00 -- event/cpu_locks.sh@153 -- # waitforlisten 57944 /var/tmp/spdk2.sock 00:05:16.123 06:58:00 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:16.123 06:58:00 -- common/autotest_common.sh@819 -- # '[' -z 57944 ']' 00:05:16.123 06:58:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.123 06:58:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.123 06:58:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:16.123 06:58:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.123 06:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:16.123 [2024-07-11 06:58:00.171935] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:16.123 [2024-07-11 06:58:00.172042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57944 ] 00:05:16.381 [2024-07-11 06:58:00.313080] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:16.381 [2024-07-11 06:58:00.313204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.639 [2024-07-11 06:58:00.480869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.639 [2024-07-11 06:58:00.481324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.639 [2024-07-11 06:58:00.481756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:16.639 [2024-07-11 06:58:00.481762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.205 06:58:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.205 06:58:01 -- common/autotest_common.sh@852 -- # return 0 00:05:17.205 06:58:01 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.205 06:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.205 06:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.205 06:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.205 06:58:01 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.205 06:58:01 -- common/autotest_common.sh@640 -- # local es=0 00:05:17.205 06:58:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.205 06:58:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:17.205 06:58:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:17.205 06:58:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:17.205 06:58:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:17.205 06:58:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.205 06:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.205 06:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.205 [2024-07-11 06:58:01.188687] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57914 has claimed it. 
00:05:17.205 2024/07/11 06:58:01 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:17.205 request: 00:05:17.205 { 00:05:17.205 "method": "framework_enable_cpumask_locks", 00:05:17.205 "params": {} 00:05:17.205 } 00:05:17.205 Got JSON-RPC error response 00:05:17.205 GoRPCClient: error on JSON-RPC call 00:05:17.205 06:58:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:17.205 06:58:01 -- common/autotest_common.sh@643 -- # es=1 00:05:17.205 06:58:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:17.205 06:58:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:17.205 06:58:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:17.205 06:58:01 -- event/cpu_locks.sh@158 -- # waitforlisten 57914 /var/tmp/spdk.sock 00:05:17.205 06:58:01 -- common/autotest_common.sh@819 -- # '[' -z 57914 ']' 00:05:17.205 06:58:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.205 06:58:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.205 06:58:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.205 06:58:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.205 06:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.771 06:58:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.772 06:58:01 -- common/autotest_common.sh@852 -- # return 0 00:05:17.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.772 06:58:01 -- event/cpu_locks.sh@159 -- # waitforlisten 57944 /var/tmp/spdk2.sock 00:05:17.772 06:58:01 -- common/autotest_common.sh@819 -- # '[' -z 57944 ']' 00:05:17.772 06:58:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.772 06:58:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.772 06:58:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:17.772 06:58:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.772 06:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:17.772 06:58:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.772 06:58:01 -- common/autotest_common.sh@852 -- # return 0 00:05:17.772 06:58:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:17.772 06:58:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:17.772 06:58:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:17.772 ************************************ 00:05:17.772 END TEST locking_overlapped_coremask_via_rpc 00:05:17.772 ************************************ 00:05:17.772 06:58:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:17.772 00:05:17.772 real 0m2.718s 00:05:17.772 user 0m1.378s 00:05:17.772 sys 0m0.259s 00:05:17.772 06:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.772 06:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:18.033 06:58:01 -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.033 06:58:01 -- event/cpu_locks.sh@15 -- # [[ -z 57914 ]] 00:05:18.033 06:58:01 -- event/cpu_locks.sh@15 -- # killprocess 57914 00:05:18.033 06:58:01 -- common/autotest_common.sh@926 -- # '[' -z 57914 ']' 00:05:18.033 06:58:01 -- common/autotest_common.sh@930 -- # kill -0 57914 00:05:18.033 06:58:01 -- common/autotest_common.sh@931 -- # uname 00:05:18.033 06:58:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.033 06:58:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57914 00:05:18.033 killing process with pid 57914 00:05:18.033 06:58:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.033 06:58:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.033 06:58:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57914' 00:05:18.034 06:58:01 -- common/autotest_common.sh@945 -- # kill 57914 00:05:18.034 06:58:01 -- common/autotest_common.sh@950 -- # wait 57914 00:05:18.600 06:58:02 -- event/cpu_locks.sh@16 -- # [[ -z 57944 ]] 00:05:18.600 06:58:02 -- event/cpu_locks.sh@16 -- # killprocess 57944 00:05:18.600 06:58:02 -- common/autotest_common.sh@926 -- # '[' -z 57944 ']' 00:05:18.600 06:58:02 -- common/autotest_common.sh@930 -- # kill -0 57944 00:05:18.600 06:58:02 -- common/autotest_common.sh@931 -- # uname 00:05:18.600 06:58:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.600 06:58:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57944 00:05:18.600 killing process with pid 57944 00:05:18.600 06:58:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:18.600 06:58:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:18.600 06:58:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57944' 00:05:18.600 06:58:02 -- common/autotest_common.sh@945 -- # kill 57944 00:05:18.600 06:58:02 -- common/autotest_common.sh@950 -- # wait 57944 00:05:18.858 06:58:02 -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.858 Process with pid 57914 is not found 00:05:18.858 Process with pid 57944 is not found 00:05:18.858 06:58:02 -- event/cpu_locks.sh@1 -- # cleanup 00:05:18.858 06:58:02 -- event/cpu_locks.sh@15 -- # [[ -z 57914 ]] 
00:05:18.858 06:58:02 -- event/cpu_locks.sh@15 -- # killprocess 57914 00:05:18.858 06:58:02 -- common/autotest_common.sh@926 -- # '[' -z 57914 ']' 00:05:18.858 06:58:02 -- common/autotest_common.sh@930 -- # kill -0 57914 00:05:18.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (57914) - No such process 00:05:18.858 06:58:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 57914 is not found' 00:05:18.858 06:58:02 -- event/cpu_locks.sh@16 -- # [[ -z 57944 ]] 00:05:18.858 06:58:02 -- event/cpu_locks.sh@16 -- # killprocess 57944 00:05:18.859 06:58:02 -- common/autotest_common.sh@926 -- # '[' -z 57944 ']' 00:05:18.859 06:58:02 -- common/autotest_common.sh@930 -- # kill -0 57944 00:05:18.859 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (57944) - No such process 00:05:18.859 06:58:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 57944 is not found' 00:05:18.859 06:58:02 -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.859 00:05:18.859 real 0m23.502s 00:05:18.859 user 0m39.293s 00:05:18.859 sys 0m6.098s 00:05:18.859 ************************************ 00:05:18.859 END TEST cpu_locks 00:05:18.859 ************************************ 00:05:18.859 06:58:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.859 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.116 ************************************ 00:05:19.116 END TEST event 00:05:19.116 ************************************ 00:05:19.116 00:05:19.116 real 0m49.884s 00:05:19.116 user 1m32.792s 00:05:19.116 sys 0m10.100s 00:05:19.117 06:58:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.117 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.117 06:58:02 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:19.117 06:58:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.117 06:58:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.117 06:58:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.117 ************************************ 00:05:19.117 START TEST thread 00:05:19.117 ************************************ 00:05:19.117 06:58:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:19.117 * Looking for test storage... 00:05:19.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:19.117 06:58:03 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.117 06:58:03 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:19.117 06:58:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.117 06:58:03 -- common/autotest_common.sh@10 -- # set +x 00:05:19.117 ************************************ 00:05:19.117 START TEST thread_poller_perf 00:05:19.117 ************************************ 00:05:19.117 06:58:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.117 [2024-07-11 06:58:03.093331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:19.117 [2024-07-11 06:58:03.093427] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58095 ] 00:05:19.375 [2024-07-11 06:58:03.217266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.375 [2024-07-11 06:58:03.306384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.375 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:20.749 ====================================== 00:05:20.749 busy:2210695120 (cyc) 00:05:20.750 total_run_count: 310000 00:05:20.750 tsc_hz: 2200000000 (cyc) 00:05:20.750 ====================================== 00:05:20.750 poller_cost: 7131 (cyc), 3241 (nsec) 00:05:20.750 00:05:20.750 real 0m1.372s 00:05:20.750 user 0m1.202s 00:05:20.750 sys 0m0.062s 00:05:20.750 06:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.750 ************************************ 00:05:20.750 END TEST thread_poller_perf 00:05:20.750 06:58:04 -- common/autotest_common.sh@10 -- # set +x 00:05:20.750 ************************************ 00:05:20.750 06:58:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.750 06:58:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:20.750 06:58:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.750 06:58:04 -- common/autotest_common.sh@10 -- # set +x 00:05:20.750 ************************************ 00:05:20.750 START TEST thread_poller_perf 00:05:20.750 ************************************ 00:05:20.750 06:58:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.750 [2024-07-11 06:58:04.522909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:20.750 [2024-07-11 06:58:04.523009] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58125 ] 00:05:20.750 [2024-07-11 06:58:04.657891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.750 [2024-07-11 06:58:04.760352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.750 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:22.126 ====================================== 00:05:22.126 busy:2202662040 (cyc) 00:05:22.126 total_run_count: 5087000 00:05:22.126 tsc_hz: 2200000000 (cyc) 00:05:22.126 ====================================== 00:05:22.126 poller_cost: 432 (cyc), 196 (nsec) 00:05:22.126 00:05:22.126 real 0m1.408s 00:05:22.126 user 0m1.233s 00:05:22.126 sys 0m0.067s 00:05:22.126 06:58:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.126 ************************************ 00:05:22.126 END TEST thread_poller_perf 00:05:22.126 ************************************ 00:05:22.126 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 06:58:05 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:22.126 ************************************ 00:05:22.126 END TEST thread 00:05:22.126 ************************************ 00:05:22.126 00:05:22.126 real 0m2.972s 00:05:22.126 user 0m2.503s 00:05:22.126 sys 0m0.248s 00:05:22.126 06:58:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.126 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 06:58:05 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:22.126 06:58:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.126 06:58:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.126 06:58:05 -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 ************************************ 00:05:22.126 START TEST accel 00:05:22.126 ************************************ 00:05:22.126 06:58:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:22.126 * Looking for test storage... 00:05:22.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:22.126 06:58:06 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:22.126 06:58:06 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:22.126 06:58:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.126 06:58:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=58204 00:05:22.126 06:58:06 -- accel/accel.sh@60 -- # waitforlisten 58204 00:05:22.126 06:58:06 -- common/autotest_common.sh@819 -- # '[' -z 58204 ']' 00:05:22.126 06:58:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.126 06:58:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.126 06:58:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.126 06:58:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.126 06:58:06 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:22.126 06:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 06:58:06 -- accel/accel.sh@58 -- # build_accel_config 00:05:22.126 06:58:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.126 06:58:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.126 06:58:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.126 06:58:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.126 06:58:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.126 06:58:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.126 06:58:06 -- accel/accel.sh@42 -- # jq -r . 00:05:22.126 [2024-07-11 06:58:06.154300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:22.126 [2024-07-11 06:58:06.154390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58204 ] 00:05:22.385 [2024-07-11 06:58:06.293888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.385 [2024-07-11 06:58:06.421103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.385 [2024-07-11 06:58:06.421368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.320 06:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.320 06:58:07 -- common/autotest_common.sh@852 -- # return 0 00:05:23.320 06:58:07 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:23.320 06:58:07 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:23.320 06:58:07 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:23.320 06:58:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.320 06:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.320 06:58:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.320 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.320 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.320 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.320 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.320 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.320 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.320 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.320 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.320 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.320 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.320 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.320 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.320 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # IFS== 00:05:23.321 06:58:07 -- accel/accel.sh@64 -- # read -r opc module 00:05:23.321 06:58:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:23.321 06:58:07 -- accel/accel.sh@67 -- # killprocess 58204 00:05:23.321 06:58:07 -- common/autotest_common.sh@926 -- # '[' -z 58204 ']' 00:05:23.321 06:58:07 -- common/autotest_common.sh@930 -- # kill -0 58204 00:05:23.321 06:58:07 -- common/autotest_common.sh@931 -- # uname 00:05:23.321 06:58:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:23.321 06:58:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58204 00:05:23.321 06:58:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:23.321 06:58:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:23.321 killing process with pid 58204 00:05:23.321 06:58:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58204' 00:05:23.321 06:58:07 -- common/autotest_common.sh@945 -- # kill 58204 00:05:23.321 06:58:07 -- common/autotest_common.sh@950 -- # wait 58204 00:05:23.887 06:58:07 -- accel/accel.sh@68 -- # trap - ERR 00:05:23.887 06:58:07 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:23.887 06:58:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:23.887 06:58:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.887 06:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.887 06:58:07 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:23.887 06:58:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:23.887 06:58:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.887 06:58:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.887 06:58:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.887 06:58:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.887 06:58:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.887 06:58:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:05:23.887 06:58:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.887 06:58:07 -- accel/accel.sh@42 -- # jq -r . 00:05:23.888 06:58:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.888 06:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.888 06:58:07 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:23.888 06:58:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:23.888 06:58:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.888 06:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:23.888 ************************************ 00:05:23.888 START TEST accel_missing_filename 00:05:23.888 ************************************ 00:05:23.888 06:58:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:23.888 06:58:07 -- common/autotest_common.sh@640 -- # local es=0 00:05:23.888 06:58:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:23.888 06:58:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:23.888 06:58:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:23.888 06:58:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:23.888 06:58:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:23.888 06:58:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:23.888 06:58:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:23.888 06:58:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.888 06:58:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.888 06:58:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.888 06:58:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.888 06:58:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.888 06:58:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.888 06:58:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.888 06:58:07 -- accel/accel.sh@42 -- # jq -r . 00:05:23.888 [2024-07-11 06:58:07.869375] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:23.888 [2024-07-11 06:58:07.869473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58274 ] 00:05:24.161 [2024-07-11 06:58:08.001072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.161 [2024-07-11 06:58:08.124894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.161 [2024-07-11 06:58:08.178677] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.455 [2024-07-11 06:58:08.255973] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:24.455 A filename is required. 
00:05:24.455 06:58:08 -- common/autotest_common.sh@643 -- # es=234 00:05:24.455 06:58:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:24.455 06:58:08 -- common/autotest_common.sh@652 -- # es=106 00:05:24.455 06:58:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:24.455 06:58:08 -- common/autotest_common.sh@660 -- # es=1 00:05:24.455 06:58:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:24.455 00:05:24.455 real 0m0.517s 00:05:24.455 user 0m0.343s 00:05:24.455 sys 0m0.107s 00:05:24.455 06:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.455 ************************************ 00:05:24.455 END TEST accel_missing_filename 00:05:24.455 ************************************ 00:05:24.455 06:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:24.455 06:58:08 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.455 06:58:08 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:24.455 06:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.455 06:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:24.455 ************************************ 00:05:24.455 START TEST accel_compress_verify 00:05:24.455 ************************************ 00:05:24.455 06:58:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.455 06:58:08 -- common/autotest_common.sh@640 -- # local es=0 00:05:24.455 06:58:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.455 06:58:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:24.455 06:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.455 06:58:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:24.455 06:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.455 06:58:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.455 06:58:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.455 06:58:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.455 06:58:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.455 06:58:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.455 06:58:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.455 06:58:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.455 06:58:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.455 06:58:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.455 06:58:08 -- accel/accel.sh@42 -- # jq -r . 00:05:24.455 [2024-07-11 06:58:08.440206] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:24.455 [2024-07-11 06:58:08.440296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:05:24.714 [2024-07-11 06:58:08.568404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.714 [2024-07-11 06:58:08.639493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.714 [2024-07-11 06:58:08.694648] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.972 [2024-07-11 06:58:08.777037] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:24.972 00:05:24.972 Compression does not support the verify option, aborting. 00:05:24.972 06:58:08 -- common/autotest_common.sh@643 -- # es=161 00:05:24.972 06:58:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:24.972 06:58:08 -- common/autotest_common.sh@652 -- # es=33 00:05:24.972 06:58:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:24.972 06:58:08 -- common/autotest_common.sh@660 -- # es=1 00:05:24.972 06:58:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:24.972 00:05:24.972 real 0m0.466s 00:05:24.972 user 0m0.303s 00:05:24.972 sys 0m0.109s 00:05:24.972 06:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.972 ************************************ 00:05:24.972 06:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:24.972 END TEST accel_compress_verify 00:05:24.972 ************************************ 00:05:24.972 06:58:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:24.972 06:58:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:24.972 06:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.972 06:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:24.972 ************************************ 00:05:24.972 START TEST accel_wrong_workload 00:05:24.972 ************************************ 00:05:24.972 06:58:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:24.972 06:58:08 -- common/autotest_common.sh@640 -- # local es=0 00:05:24.972 06:58:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:24.972 06:58:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:24.972 06:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.972 06:58:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:24.972 06:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.972 06:58:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:24.972 06:58:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:24.972 06:58:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.972 06:58:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.972 06:58:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.972 06:58:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.972 06:58:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.972 06:58:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.972 06:58:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.972 06:58:08 -- accel/accel.sh@42 -- # jq -r . 
00:05:24.972 Unsupported workload type: foobar 00:05:24.972 [2024-07-11 06:58:08.959671] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:24.972 accel_perf options: 00:05:24.973 [-h help message] 00:05:24.973 [-q queue depth per core] 00:05:24.973 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:24.973 [-T number of threads per core 00:05:24.973 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:24.973 [-t time in seconds] 00:05:24.973 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:24.973 [ dif_verify, , dif_generate, dif_generate_copy 00:05:24.973 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:24.973 [-l for compress/decompress workloads, name of uncompressed input file 00:05:24.973 [-S for crc32c workload, use this seed value (default 0) 00:05:24.973 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:24.973 [-f for fill workload, use this BYTE value (default 255) 00:05:24.973 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:24.973 [-y verify result if this switch is on] 00:05:24.973 [-a tasks to allocate per core (default: same value as -q)] 00:05:24.973 Can be used to spread operations across a wider range of memory. 00:05:24.973 06:58:08 -- common/autotest_common.sh@643 -- # es=1 00:05:24.973 06:58:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:24.973 06:58:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:24.973 06:58:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:24.973 00:05:24.973 real 0m0.032s 00:05:24.973 user 0m0.019s 00:05:24.973 sys 0m0.012s 00:05:24.973 06:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.973 06:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:24.973 ************************************ 00:05:24.973 END TEST accel_wrong_workload 00:05:24.973 ************************************ 00:05:24.973 06:58:09 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:24.973 06:58:09 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:24.973 06:58:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.973 06:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:24.973 ************************************ 00:05:24.973 START TEST accel_negative_buffers 00:05:24.973 ************************************ 00:05:24.973 06:58:09 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:24.973 06:58:09 -- common/autotest_common.sh@640 -- # local es=0 00:05:24.973 06:58:09 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:24.973 06:58:09 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:24.973 06:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.973 06:58:09 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:24.973 06:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:24.973 06:58:09 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:24.973 06:58:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:24.973 06:58:09 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:24.973 06:58:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.973 06:58:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.973 06:58:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.973 06:58:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.973 06:58:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.973 06:58:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.973 06:58:09 -- accel/accel.sh@42 -- # jq -r . 00:05:25.232 -x option must be non-negative. 00:05:25.232 [2024-07-11 06:58:09.040307] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:25.232 accel_perf options: 00:05:25.232 [-h help message] 00:05:25.232 [-q queue depth per core] 00:05:25.232 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:25.232 [-T number of threads per core 00:05:25.232 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:25.232 [-t time in seconds] 00:05:25.232 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:25.232 [ dif_verify, , dif_generate, dif_generate_copy 00:05:25.232 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:25.232 [-l for compress/decompress workloads, name of uncompressed input file 00:05:25.232 [-S for crc32c workload, use this seed value (default 0) 00:05:25.232 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:25.232 [-f for fill workload, use this BYTE value (default 255) 00:05:25.232 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:25.232 [-y verify result if this switch is on] 00:05:25.232 [-a tasks to allocate per core (default: same value as -q)] 00:05:25.232 Can be used to spread operations across a wider range of memory. 
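[editor's note] The block above is accel_perf's own usage text, printed because the -x -1 argument was rejected. For orientation, the two invocations below simply restate commands already traced in this log (binary path and flags taken from the trace); the harness additionally feeds a JSON accel config on -c /dev/fd/62, which is omitted here:

    # Valid run: software CRC-32C, seed 32, 4 KiB transfers, 1 second, verify enabled
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

    # Negative-buffer case from this test: -x may not be negative, so this prints
    # the usage text again and exits non-zero
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1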
00:05:25.232 06:58:09 -- common/autotest_common.sh@643 -- # es=1 00:05:25.232 06:58:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:25.232 06:58:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:25.232 06:58:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:25.232 00:05:25.232 real 0m0.032s 00:05:25.232 user 0m0.019s 00:05:25.232 sys 0m0.011s 00:05:25.232 06:58:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.232 06:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.232 ************************************ 00:05:25.232 END TEST accel_negative_buffers 00:05:25.232 ************************************ 00:05:25.232 06:58:09 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:25.232 06:58:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:25.232 06:58:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.232 06:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.232 ************************************ 00:05:25.232 START TEST accel_crc32c 00:05:25.232 ************************************ 00:05:25.232 06:58:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:25.232 06:58:09 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.232 06:58:09 -- accel/accel.sh@17 -- # local accel_module 00:05:25.232 06:58:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:25.232 06:58:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:25.232 06:58:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.232 06:58:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.232 06:58:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.232 06:58:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.232 06:58:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.232 06:58:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.232 06:58:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.232 06:58:09 -- accel/accel.sh@42 -- # jq -r . 00:05:25.232 [2024-07-11 06:58:09.116822] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:25.232 [2024-07-11 06:58:09.116897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58357 ] 00:05:25.232 [2024-07-11 06:58:09.252127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.490 [2024-07-11 06:58:09.329251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.866 06:58:10 -- accel/accel.sh@18 -- # out=' 00:05:26.866 SPDK Configuration: 00:05:26.866 Core mask: 0x1 00:05:26.866 00:05:26.866 Accel Perf Configuration: 00:05:26.866 Workload Type: crc32c 00:05:26.866 CRC-32C seed: 32 00:05:26.866 Transfer size: 4096 bytes 00:05:26.866 Vector count 1 00:05:26.866 Module: software 00:05:26.866 Queue depth: 32 00:05:26.866 Allocate depth: 32 00:05:26.866 # threads/core: 1 00:05:26.866 Run time: 1 seconds 00:05:26.866 Verify: Yes 00:05:26.866 00:05:26.866 Running for 1 seconds... 
00:05:26.866 00:05:26.866 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:26.866 ------------------------------------------------------------------------------------ 00:05:26.866 0,0 507520/s 1982 MiB/s 0 0 00:05:26.866 ==================================================================================== 00:05:26.866 Total 507520/s 1982 MiB/s 0 0' 00:05:26.866 06:58:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:26.866 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.866 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.866 06:58:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:26.866 06:58:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.866 06:58:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.866 06:58:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.866 06:58:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.866 06:58:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.866 06:58:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.866 06:58:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.866 06:58:10 -- accel/accel.sh@42 -- # jq -r . 00:05:26.866 [2024-07-11 06:58:10.579977] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:26.866 [2024-07-11 06:58:10.580044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58376 ] 00:05:26.866 [2024-07-11 06:58:10.708901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.866 [2024-07-11 06:58:10.798803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.866 06:58:10 -- accel/accel.sh@21 -- # val= 00:05:26.866 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.866 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.866 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.866 06:58:10 -- accel/accel.sh@21 -- # val= 00:05:26.866 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.866 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=0x1 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val= 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val= 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=crc32c 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=32 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val= 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=software 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@23 -- # accel_module=software 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=32 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=32 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=1 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val=Yes 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val= 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:26.867 06:58:10 -- accel/accel.sh@21 -- # val= 00:05:26.867 06:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # IFS=: 00:05:26.867 06:58:10 -- accel/accel.sh@20 -- # read -r var val 00:05:28.243 06:58:12 -- accel/accel.sh@21 -- # val= 00:05:28.243 06:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # IFS=: 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # read -r var val 00:05:28.243 06:58:12 -- accel/accel.sh@21 -- # val= 00:05:28.243 06:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # IFS=: 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # read -r var val 00:05:28.243 06:58:12 -- accel/accel.sh@21 -- # val= 00:05:28.243 06:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # IFS=: 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # read -r var val 00:05:28.243 06:58:12 -- accel/accel.sh@21 -- # val= 00:05:28.243 06:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # IFS=: 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # read -r var val 00:05:28.243 06:58:12 -- accel/accel.sh@21 -- # val= 00:05:28.243 06:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # IFS=: 00:05:28.243 06:58:12 -- 
accel/accel.sh@20 -- # read -r var val 00:05:28.243 ************************************ 00:05:28.243 END TEST accel_crc32c 00:05:28.243 ************************************ 00:05:28.243 06:58:12 -- accel/accel.sh@21 -- # val= 00:05:28.243 06:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # IFS=: 00:05:28.243 06:58:12 -- accel/accel.sh@20 -- # read -r var val 00:05:28.243 06:58:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.243 06:58:12 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:28.243 06:58:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.243 00:05:28.243 real 0m2.934s 00:05:28.243 user 0m2.512s 00:05:28.243 sys 0m0.220s 00:05:28.243 06:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.243 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:28.243 06:58:12 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:28.243 06:58:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:28.243 06:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.243 06:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:28.243 ************************************ 00:05:28.243 START TEST accel_crc32c_C2 00:05:28.243 ************************************ 00:05:28.243 06:58:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:28.243 06:58:12 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.243 06:58:12 -- accel/accel.sh@17 -- # local accel_module 00:05:28.243 06:58:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:28.243 06:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:28.243 06:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.243 06:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.243 06:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.243 06:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.243 06:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.243 06:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.243 06:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.243 06:58:12 -- accel/accel.sh@42 -- # jq -r . 00:05:28.243 [2024-07-11 06:58:12.103977] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:28.243 [2024-07-11 06:58:12.104065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58411 ] 00:05:28.243 [2024-07-11 06:58:12.242657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.502 [2024-07-11 06:58:12.332858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.878 06:58:13 -- accel/accel.sh@18 -- # out=' 00:05:29.878 SPDK Configuration: 00:05:29.878 Core mask: 0x1 00:05:29.878 00:05:29.878 Accel Perf Configuration: 00:05:29.878 Workload Type: crc32c 00:05:29.878 CRC-32C seed: 0 00:05:29.878 Transfer size: 4096 bytes 00:05:29.878 Vector count 2 00:05:29.878 Module: software 00:05:29.878 Queue depth: 32 00:05:29.878 Allocate depth: 32 00:05:29.878 # threads/core: 1 00:05:29.878 Run time: 1 seconds 00:05:29.878 Verify: Yes 00:05:29.878 00:05:29.878 Running for 1 seconds... 
00:05:29.878 00:05:29.878 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.878 ------------------------------------------------------------------------------------ 00:05:29.878 0,0 368448/s 2878 MiB/s 0 0 00:05:29.878 ==================================================================================== 00:05:29.878 Total 368448/s 1439 MiB/s 0 0' 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:29.878 06:58:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:29.878 06:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.878 06:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.878 06:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.878 06:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.878 06:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.878 06:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.878 06:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.878 06:58:13 -- accel/accel.sh@42 -- # jq -r . 00:05:29.878 [2024-07-11 06:58:13.588836] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:29.878 [2024-07-11 06:58:13.588924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58430 ] 00:05:29.878 [2024-07-11 06:58:13.718598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.878 [2024-07-11 06:58:13.821647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val= 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val= 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val=0x1 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val= 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val= 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val=crc32c 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val=0 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val= 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val=software 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val=32 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val=32 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val=1 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.878 06:58:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:29.878 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.878 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.879 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.879 06:58:13 -- accel/accel.sh@21 -- # val=Yes 00:05:29.879 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.879 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.879 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.879 06:58:13 -- accel/accel.sh@21 -- # val= 00:05:29.879 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.879 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.879 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:29.879 06:58:13 -- accel/accel.sh@21 -- # val= 00:05:29.879 06:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.879 06:58:13 -- accel/accel.sh@20 -- # IFS=: 00:05:29.879 06:58:13 -- accel/accel.sh@20 -- # read -r var val 00:05:31.255 06:58:15 -- accel/accel.sh@21 -- # val= 00:05:31.255 06:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # IFS=: 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # read -r var val 00:05:31.255 06:58:15 -- accel/accel.sh@21 -- # val= 00:05:31.255 06:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # IFS=: 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # read -r var val 00:05:31.255 06:58:15 -- accel/accel.sh@21 -- # val= 00:05:31.255 06:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # IFS=: 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # read -r var val 00:05:31.255 06:58:15 -- accel/accel.sh@21 -- # val= 00:05:31.255 06:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # IFS=: 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # read -r var val 00:05:31.255 06:58:15 -- accel/accel.sh@21 -- # val= 00:05:31.255 06:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # IFS=: 00:05:31.255 06:58:15 -- 
accel/accel.sh@20 -- # read -r var val 00:05:31.255 06:58:15 -- accel/accel.sh@21 -- # val= 00:05:31.255 06:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # IFS=: 00:05:31.255 ************************************ 00:05:31.255 END TEST accel_crc32c_C2 00:05:31.255 06:58:15 -- accel/accel.sh@20 -- # read -r var val 00:05:31.255 06:58:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:31.255 06:58:15 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:31.255 06:58:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.255 00:05:31.255 real 0m2.986s 00:05:31.255 user 0m2.575s 00:05:31.255 sys 0m0.206s 00:05:31.255 06:58:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.255 06:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.255 ************************************ 00:05:31.255 06:58:15 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:31.255 06:58:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:31.255 06:58:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.255 06:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:31.255 ************************************ 00:05:31.255 START TEST accel_copy 00:05:31.255 ************************************ 00:05:31.255 06:58:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:31.255 06:58:15 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.255 06:58:15 -- accel/accel.sh@17 -- # local accel_module 00:05:31.255 06:58:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:31.255 06:58:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:31.255 06:58:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.255 06:58:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.255 06:58:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.255 06:58:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.255 06:58:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.255 06:58:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.255 06:58:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.255 06:58:15 -- accel/accel.sh@42 -- # jq -r . 00:05:31.255 [2024-07-11 06:58:15.141063] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:31.255 [2024-07-11 06:58:15.141182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58465 ] 00:05:31.255 [2024-07-11 06:58:15.287622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.514 [2024-07-11 06:58:15.388462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.891 06:58:16 -- accel/accel.sh@18 -- # out=' 00:05:32.891 SPDK Configuration: 00:05:32.891 Core mask: 0x1 00:05:32.891 00:05:32.891 Accel Perf Configuration: 00:05:32.892 Workload Type: copy 00:05:32.892 Transfer size: 4096 bytes 00:05:32.892 Vector count 1 00:05:32.892 Module: software 00:05:32.892 Queue depth: 32 00:05:32.892 Allocate depth: 32 00:05:32.892 # threads/core: 1 00:05:32.892 Run time: 1 seconds 00:05:32.892 Verify: Yes 00:05:32.892 00:05:32.892 Running for 1 seconds... 
00:05:32.892 00:05:32.892 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:32.892 ------------------------------------------------------------------------------------ 00:05:32.892 0,0 342880/s 1339 MiB/s 0 0 00:05:32.892 ==================================================================================== 00:05:32.892 Total 342880/s 1339 MiB/s 0 0' 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:32.892 06:58:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.892 06:58:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:32.892 06:58:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.892 06:58:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.892 06:58:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.892 06:58:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.892 06:58:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.892 06:58:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.892 06:58:16 -- accel/accel.sh@42 -- # jq -r . 00:05:32.892 [2024-07-11 06:58:16.643816] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:32.892 [2024-07-11 06:58:16.643900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58484 ] 00:05:32.892 [2024-07-11 06:58:16.778428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.892 [2024-07-11 06:58:16.851012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val= 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val= 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val=0x1 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val= 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val= 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val=copy 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- 
accel/accel.sh@21 -- # val= 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val=software 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@23 -- # accel_module=software 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val=32 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val=32 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val=1 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val=Yes 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val= 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:32.892 06:58:16 -- accel/accel.sh@21 -- # val= 00:05:32.892 06:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # IFS=: 00:05:32.892 06:58:16 -- accel/accel.sh@20 -- # read -r var val 00:05:34.268 06:58:18 -- accel/accel.sh@21 -- # val= 00:05:34.268 06:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # IFS=: 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # read -r var val 00:05:34.268 06:58:18 -- accel/accel.sh@21 -- # val= 00:05:34.268 06:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # IFS=: 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # read -r var val 00:05:34.268 06:58:18 -- accel/accel.sh@21 -- # val= 00:05:34.268 06:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # IFS=: 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # read -r var val 00:05:34.268 06:58:18 -- accel/accel.sh@21 -- # val= 00:05:34.268 06:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # IFS=: 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # read -r var val 00:05:34.268 06:58:18 -- accel/accel.sh@21 -- # val= 00:05:34.268 06:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # IFS=: 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # read -r var val 00:05:34.268 06:58:18 -- accel/accel.sh@21 -- # val= 00:05:34.268 06:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.268 06:58:18 -- accel/accel.sh@20 -- # IFS=: 00:05:34.268 06:58:18 -- 
accel/accel.sh@20 -- # read -r var val 00:05:34.268 06:58:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:34.268 06:58:18 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:34.268 06:58:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.268 00:05:34.268 real 0m2.969s 00:05:34.268 user 0m2.542s 00:05:34.268 sys 0m0.225s 00:05:34.268 06:58:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.268 ************************************ 00:05:34.268 END TEST accel_copy 00:05:34.268 ************************************ 00:05:34.268 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:05:34.268 06:58:18 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.268 06:58:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:34.268 06:58:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.268 06:58:18 -- common/autotest_common.sh@10 -- # set +x 00:05:34.268 ************************************ 00:05:34.268 START TEST accel_fill 00:05:34.268 ************************************ 00:05:34.268 06:58:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.268 06:58:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.268 06:58:18 -- accel/accel.sh@17 -- # local accel_module 00:05:34.268 06:58:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.268 06:58:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.268 06:58:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.268 06:58:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.268 06:58:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.268 06:58:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.268 06:58:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.268 06:58:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.268 06:58:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.268 06:58:18 -- accel/accel.sh@42 -- # jq -r . 00:05:34.268 [2024-07-11 06:58:18.158115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:34.268 [2024-07-11 06:58:18.158209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58519 ] 00:05:34.268 [2024-07-11 06:58:18.295062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.532 [2024-07-11 06:58:18.367928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.920 06:58:19 -- accel/accel.sh@18 -- # out=' 00:05:35.920 SPDK Configuration: 00:05:35.920 Core mask: 0x1 00:05:35.920 00:05:35.920 Accel Perf Configuration: 00:05:35.920 Workload Type: fill 00:05:35.920 Fill pattern: 0x80 00:05:35.920 Transfer size: 4096 bytes 00:05:35.920 Vector count 1 00:05:35.920 Module: software 00:05:35.920 Queue depth: 64 00:05:35.920 Allocate depth: 64 00:05:35.920 # threads/core: 1 00:05:35.920 Run time: 1 seconds 00:05:35.920 Verify: Yes 00:05:35.920 00:05:35.920 Running for 1 seconds... 
00:05:35.920 00:05:35.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:35.920 ------------------------------------------------------------------------------------ 00:05:35.920 0,0 519552/s 2029 MiB/s 0 0 00:05:35.920 ==================================================================================== 00:05:35.920 Total 519552/s 2029 MiB/s 0 0' 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:35.920 06:58:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:35.920 06:58:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.920 06:58:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.920 06:58:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.920 06:58:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.920 06:58:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.920 06:58:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.920 06:58:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.920 06:58:19 -- accel/accel.sh@42 -- # jq -r . 00:05:35.920 [2024-07-11 06:58:19.624116] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:35.920 [2024-07-11 06:58:19.624197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58534 ] 00:05:35.920 [2024-07-11 06:58:19.761389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.920 [2024-07-11 06:58:19.841036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val= 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val= 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=0x1 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val= 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val= 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=fill 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=0x80 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 
00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val= 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=software 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@23 -- # accel_module=software 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=64 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=64 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=1 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val=Yes 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val= 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:35.920 06:58:19 -- accel/accel.sh@21 -- # val= 00:05:35.920 06:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # IFS=: 00:05:35.920 06:58:19 -- accel/accel.sh@20 -- # read -r var val 00:05:37.296 06:58:21 -- accel/accel.sh@21 -- # val= 00:05:37.296 06:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.296 06:58:21 -- accel/accel.sh@20 -- # IFS=: 00:05:37.296 06:58:21 -- accel/accel.sh@20 -- # read -r var val 00:05:37.296 06:58:21 -- accel/accel.sh@21 -- # val= 00:05:37.297 06:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # IFS=: 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # read -r var val 00:05:37.297 06:58:21 -- accel/accel.sh@21 -- # val= 00:05:37.297 06:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # IFS=: 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # read -r var val 00:05:37.297 06:58:21 -- accel/accel.sh@21 -- # val= 00:05:37.297 06:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # IFS=: 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # read -r var val 00:05:37.297 ************************************ 00:05:37.297 END TEST accel_fill 00:05:37.297 ************************************ 00:05:37.297 06:58:21 -- 
accel/accel.sh@21 -- # val= 00:05:37.297 06:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # IFS=: 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # read -r var val 00:05:37.297 06:58:21 -- accel/accel.sh@21 -- # val= 00:05:37.297 06:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # IFS=: 00:05:37.297 06:58:21 -- accel/accel.sh@20 -- # read -r var val 00:05:37.297 06:58:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:37.297 06:58:21 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:37.297 06:58:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.297 00:05:37.297 real 0m2.949s 00:05:37.297 user 0m2.530s 00:05:37.297 sys 0m0.215s 00:05:37.297 06:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.297 06:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:37.297 06:58:21 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:37.297 06:58:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:37.297 06:58:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.297 06:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:37.297 ************************************ 00:05:37.297 START TEST accel_copy_crc32c 00:05:37.297 ************************************ 00:05:37.297 06:58:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:37.297 06:58:21 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.297 06:58:21 -- accel/accel.sh@17 -- # local accel_module 00:05:37.297 06:58:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:37.297 06:58:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:37.297 06:58:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.297 06:58:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.297 06:58:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.297 06:58:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.297 06:58:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.297 06:58:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.297 06:58:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.297 06:58:21 -- accel/accel.sh@42 -- # jq -r . 00:05:37.297 [2024-07-11 06:58:21.156782] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:37.297 [2024-07-11 06:58:21.156895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58573 ] 00:05:37.297 [2024-07-11 06:58:21.286690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.555 [2024-07-11 06:58:21.383640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.931 06:58:22 -- accel/accel.sh@18 -- # out=' 00:05:38.931 SPDK Configuration: 00:05:38.931 Core mask: 0x1 00:05:38.931 00:05:38.931 Accel Perf Configuration: 00:05:38.931 Workload Type: copy_crc32c 00:05:38.931 CRC-32C seed: 0 00:05:38.931 Vector size: 4096 bytes 00:05:38.931 Transfer size: 4096 bytes 00:05:38.931 Vector count 1 00:05:38.931 Module: software 00:05:38.931 Queue depth: 32 00:05:38.931 Allocate depth: 32 00:05:38.931 # threads/core: 1 00:05:38.931 Run time: 1 seconds 00:05:38.931 Verify: Yes 00:05:38.931 00:05:38.931 Running for 1 seconds... 
00:05:38.931 00:05:38.931 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:38.932 ------------------------------------------------------------------------------------ 00:05:38.932 0,0 281824/s 1100 MiB/s 0 0 00:05:38.932 ==================================================================================== 00:05:38.932 Total 281824/s 1100 MiB/s 0 0' 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:38.932 06:58:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.932 06:58:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.932 06:58:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.932 06:58:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.932 06:58:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.932 06:58:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.932 06:58:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.932 06:58:22 -- accel/accel.sh@42 -- # jq -r . 00:05:38.932 [2024-07-11 06:58:22.651684] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:38.932 [2024-07-11 06:58:22.651775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58587 ] 00:05:38.932 [2024-07-11 06:58:22.790615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.932 [2024-07-11 06:58:22.896246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val= 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val= 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=0x1 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val= 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val= 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=0 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 
06:58:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val= 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=software 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@23 -- # accel_module=software 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=32 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=32 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=1 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val=Yes 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val= 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:38.932 06:58:22 -- accel/accel.sh@21 -- # val= 00:05:38.932 06:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # IFS=: 00:05:38.932 06:58:22 -- accel/accel.sh@20 -- # read -r var val 00:05:40.307 06:58:24 -- accel/accel.sh@21 -- # val= 00:05:40.307 06:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # IFS=: 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # read -r var val 00:05:40.307 06:58:24 -- accel/accel.sh@21 -- # val= 00:05:40.307 06:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # IFS=: 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # read -r var val 00:05:40.307 06:58:24 -- accel/accel.sh@21 -- # val= 00:05:40.307 06:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # IFS=: 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # read -r var val 00:05:40.307 06:58:24 -- accel/accel.sh@21 -- # val= 00:05:40.307 06:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # IFS=: 
00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # read -r var val 00:05:40.307 06:58:24 -- accel/accel.sh@21 -- # val= 00:05:40.307 06:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # IFS=: 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # read -r var val 00:05:40.307 06:58:24 -- accel/accel.sh@21 -- # val= 00:05:40.307 06:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # IFS=: 00:05:40.307 06:58:24 -- accel/accel.sh@20 -- # read -r var val 00:05:40.307 06:58:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:40.307 06:58:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:40.307 06:58:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.307 00:05:40.307 real 0m3.010s 00:05:40.307 user 0m2.589s 00:05:40.307 sys 0m0.221s 00:05:40.307 06:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.307 06:58:24 -- common/autotest_common.sh@10 -- # set +x 00:05:40.307 ************************************ 00:05:40.307 END TEST accel_copy_crc32c 00:05:40.307 ************************************ 00:05:40.307 06:58:24 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:40.307 06:58:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:40.307 06:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.307 06:58:24 -- common/autotest_common.sh@10 -- # set +x 00:05:40.307 ************************************ 00:05:40.307 START TEST accel_copy_crc32c_C2 00:05:40.307 ************************************ 00:05:40.307 06:58:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:40.307 06:58:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.307 06:58:24 -- accel/accel.sh@17 -- # local accel_module 00:05:40.307 06:58:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:40.307 06:58:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:40.307 06:58:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.307 06:58:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.307 06:58:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.307 06:58:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.307 06:58:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.308 06:58:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.308 06:58:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.308 06:58:24 -- accel/accel.sh@42 -- # jq -r . 00:05:40.308 [2024-07-11 06:58:24.233693] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:40.308 [2024-07-11 06:58:24.233802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58627 ] 00:05:40.565 [2024-07-11 06:58:24.367898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.565 [2024-07-11 06:58:24.472997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.940 06:58:25 -- accel/accel.sh@18 -- # out=' 00:05:41.940 SPDK Configuration: 00:05:41.940 Core mask: 0x1 00:05:41.940 00:05:41.940 Accel Perf Configuration: 00:05:41.940 Workload Type: copy_crc32c 00:05:41.940 CRC-32C seed: 0 00:05:41.940 Vector size: 4096 bytes 00:05:41.940 Transfer size: 8192 bytes 00:05:41.940 Vector count 2 00:05:41.940 Module: software 00:05:41.940 Queue depth: 32 00:05:41.940 Allocate depth: 32 00:05:41.940 # threads/core: 1 00:05:41.940 Run time: 1 seconds 00:05:41.940 Verify: Yes 00:05:41.940 00:05:41.940 Running for 1 seconds... 00:05:41.940 00:05:41.940 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:41.940 ------------------------------------------------------------------------------------ 00:05:41.940 0,0 191552/s 1496 MiB/s 0 0 00:05:41.940 ==================================================================================== 00:05:41.940 Total 191552/s 748 MiB/s 0 0' 00:05:41.940 06:58:25 -- accel/accel.sh@20 -- # IFS=: 00:05:41.940 06:58:25 -- accel/accel.sh@20 -- # read -r var val 00:05:41.940 06:58:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:41.940 06:58:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:41.940 06:58:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.940 06:58:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.940 06:58:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.940 06:58:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.940 06:58:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.940 06:58:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.940 06:58:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.940 06:58:25 -- accel/accel.sh@42 -- # jq -r . 00:05:41.940 [2024-07-11 06:58:25.737200] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:41.940 [2024-07-11 06:58:25.737293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58641 ] 00:05:41.940 [2024-07-11 06:58:25.874602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.940 [2024-07-11 06:58:25.974259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val= 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val= 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=0x1 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val= 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val= 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=0 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val= 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=software 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=32 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=32 
00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=1 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val=Yes 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val= 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:42.199 06:58:26 -- accel/accel.sh@21 -- # val= 00:05:42.199 06:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # IFS=: 00:05:42.199 06:58:26 -- accel/accel.sh@20 -- # read -r var val 00:05:43.574 06:58:27 -- accel/accel.sh@21 -- # val= 00:05:43.574 06:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # IFS=: 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # read -r var val 00:05:43.574 06:58:27 -- accel/accel.sh@21 -- # val= 00:05:43.574 06:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # IFS=: 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # read -r var val 00:05:43.574 06:58:27 -- accel/accel.sh@21 -- # val= 00:05:43.574 06:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # IFS=: 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # read -r var val 00:05:43.574 06:58:27 -- accel/accel.sh@21 -- # val= 00:05:43.574 06:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # IFS=: 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # read -r var val 00:05:43.574 06:58:27 -- accel/accel.sh@21 -- # val= 00:05:43.574 06:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # IFS=: 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # read -r var val 00:05:43.574 06:58:27 -- accel/accel.sh@21 -- # val= 00:05:43.574 06:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # IFS=: 00:05:43.574 06:58:27 -- accel/accel.sh@20 -- # read -r var val 00:05:43.574 06:58:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:43.574 06:58:27 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:43.574 06:58:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.574 00:05:43.574 real 0m3.014s 00:05:43.574 user 0m2.567s 00:05:43.574 sys 0m0.244s 00:05:43.574 ************************************ 00:05:43.574 END TEST accel_copy_crc32c_C2 00:05:43.574 ************************************ 00:05:43.574 06:58:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.574 06:58:27 -- common/autotest_common.sh@10 -- # set +x 00:05:43.574 06:58:27 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:43.574 06:58:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
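The run that just finished reported 191552 transfers/s with a 4096-byte vector size and an 8192-byte transfer size, and its two throughput lines can be cross-checked from that rate: counted per 8192-byte transfer it comes to about 1496 MiB/s (the per-core row), counted per 4096-byte vector it comes to about 748 MiB/s (the Total row), which is one plausible reading of why the two figures differ for this two-vector workload. A quick check of the arithmetic:

    awk 'BEGIN {
      tps = 191552                                    # transfers/s reported above
      printf "%.2f MiB/s at 8192 B per transfer\n", tps * 8192 / (1024 * 1024)
      printf "%.2f MiB/s at 4096 B per vector\n",   tps * 4096 / (1024 * 1024)
    }'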
00:05:43.574 06:58:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.574 06:58:27 -- common/autotest_common.sh@10 -- # set +x 00:05:43.574 ************************************ 00:05:43.574 START TEST accel_dualcast 00:05:43.574 ************************************ 00:05:43.574 06:58:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:43.574 06:58:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.574 06:58:27 -- accel/accel.sh@17 -- # local accel_module 00:05:43.574 06:58:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:43.574 06:58:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:43.574 06:58:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.574 06:58:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.574 06:58:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.574 06:58:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.574 06:58:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.574 06:58:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.574 06:58:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.574 06:58:27 -- accel/accel.sh@42 -- # jq -r . 00:05:43.574 [2024-07-11 06:58:27.297045] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:43.574 [2024-07-11 06:58:27.297150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58681 ] 00:05:43.574 [2024-07-11 06:58:27.434916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.574 [2024-07-11 06:58:27.543034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.011 06:58:28 -- accel/accel.sh@18 -- # out=' 00:05:45.011 SPDK Configuration: 00:05:45.011 Core mask: 0x1 00:05:45.011 00:05:45.011 Accel Perf Configuration: 00:05:45.011 Workload Type: dualcast 00:05:45.011 Transfer size: 4096 bytes 00:05:45.011 Vector count 1 00:05:45.011 Module: software 00:05:45.011 Queue depth: 32 00:05:45.011 Allocate depth: 32 00:05:45.011 # threads/core: 1 00:05:45.011 Run time: 1 seconds 00:05:45.011 Verify: Yes 00:05:45.011 00:05:45.011 Running for 1 seconds... 00:05:45.011 00:05:45.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.011 ------------------------------------------------------------------------------------ 00:05:45.011 0,0 379008/s 1480 MiB/s 0 0 00:05:45.011 ==================================================================================== 00:05:45.011 Total 379008/s 1480 MiB/s 0 0' 00:05:45.011 06:58:28 -- accel/accel.sh@20 -- # IFS=: 00:05:45.011 06:58:28 -- accel/accel.sh@20 -- # read -r var val 00:05:45.011 06:58:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:45.011 06:58:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.011 06:58:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:45.011 06:58:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.011 06:58:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.011 06:58:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.011 06:58:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.011 06:58:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.011 06:58:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.011 06:58:28 -- accel/accel.sh@42 -- # jq -r . 
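The dualcast case above and the compare and xor cases further down are all launched through the same accel_test wrapper and differ only in the -w workload name, plus -x 3 for the second xor pass. A rough sketch of the equivalent standalone sweep, assuming the binary path as traced and leaving every other option at its default:

    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    # Verified (-y) software-module runs of the copy-style workloads in this section.
    for w in dualcast compare xor; do
        "$ACCEL_PERF" -t 1 -w "$w" -y
    done
    # The second xor pass adds a third source buffer ("Source buffers: 3").
    "$ACCEL_PERF" -t 1 -w xor -y -x 3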
00:05:45.011 [2024-07-11 06:58:28.810119] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:45.011 [2024-07-11 06:58:28.810215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58695 ] 00:05:45.011 [2024-07-11 06:58:28.943114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.011 [2024-07-11 06:58:29.034460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.270 06:58:29 -- accel/accel.sh@21 -- # val= 00:05:45.270 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.270 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.270 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val= 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val=0x1 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val= 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val= 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val=dualcast 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val= 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val=software 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val=32 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val=32 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val=1 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 
06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val=Yes 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val= 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:45.271 06:58:29 -- accel/accel.sh@21 -- # val= 00:05:45.271 06:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # IFS=: 00:05:45.271 06:58:29 -- accel/accel.sh@20 -- # read -r var val 00:05:46.647 06:58:30 -- accel/accel.sh@21 -- # val= 00:05:46.647 06:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # IFS=: 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # read -r var val 00:05:46.647 06:58:30 -- accel/accel.sh@21 -- # val= 00:05:46.647 06:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # IFS=: 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # read -r var val 00:05:46.647 06:58:30 -- accel/accel.sh@21 -- # val= 00:05:46.647 06:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # IFS=: 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # read -r var val 00:05:46.647 06:58:30 -- accel/accel.sh@21 -- # val= 00:05:46.647 06:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # IFS=: 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # read -r var val 00:05:46.647 06:58:30 -- accel/accel.sh@21 -- # val= 00:05:46.647 06:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # IFS=: 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # read -r var val 00:05:46.647 06:58:30 -- accel/accel.sh@21 -- # val= 00:05:46.647 06:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # IFS=: 00:05:46.647 06:58:30 -- accel/accel.sh@20 -- # read -r var val 00:05:46.647 06:58:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.647 06:58:30 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:46.647 ************************************ 00:05:46.647 END TEST accel_dualcast 00:05:46.647 ************************************ 00:05:46.647 06:58:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.647 00:05:46.647 real 0m3.004s 00:05:46.647 user 0m2.569s 00:05:46.647 sys 0m0.231s 00:05:46.647 06:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.647 06:58:30 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 06:58:30 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:46.647 06:58:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:46.647 06:58:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.647 06:58:30 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 ************************************ 00:05:46.647 START TEST accel_compare 00:05:46.647 ************************************ 00:05:46.647 06:58:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:46.647 
06:58:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.647 06:58:30 -- accel/accel.sh@17 -- # local accel_module 00:05:46.647 06:58:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:46.647 06:58:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.647 06:58:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:46.647 06:58:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.647 06:58:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.647 06:58:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.647 06:58:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.647 06:58:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.647 06:58:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.647 06:58:30 -- accel/accel.sh@42 -- # jq -r . 00:05:46.647 [2024-07-11 06:58:30.352795] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:46.647 [2024-07-11 06:58:30.352884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58735 ] 00:05:46.647 [2024-07-11 06:58:30.491072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.647 [2024-07-11 06:58:30.580791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.023 06:58:31 -- accel/accel.sh@18 -- # out=' 00:05:48.023 SPDK Configuration: 00:05:48.023 Core mask: 0x1 00:05:48.023 00:05:48.023 Accel Perf Configuration: 00:05:48.023 Workload Type: compare 00:05:48.023 Transfer size: 4096 bytes 00:05:48.023 Vector count 1 00:05:48.023 Module: software 00:05:48.023 Queue depth: 32 00:05:48.023 Allocate depth: 32 00:05:48.023 # threads/core: 1 00:05:48.023 Run time: 1 seconds 00:05:48.023 Verify: Yes 00:05:48.023 00:05:48.023 Running for 1 seconds... 00:05:48.023 00:05:48.023 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.023 ------------------------------------------------------------------------------------ 00:05:48.023 0,0 487872/s 1905 MiB/s 0 0 00:05:48.023 ==================================================================================== 00:05:48.023 Total 487872/s 1905 MiB/s 0 0' 00:05:48.023 06:58:31 -- accel/accel.sh@20 -- # IFS=: 00:05:48.023 06:58:31 -- accel/accel.sh@20 -- # read -r var val 00:05:48.023 06:58:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:48.023 06:58:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:48.023 06:58:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.023 06:58:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.023 06:58:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.023 06:58:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.023 06:58:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.023 06:58:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.023 06:58:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.023 06:58:31 -- accel/accel.sh@42 -- # jq -r . 00:05:48.023 [2024-07-11 06:58:31.837726] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:48.023 [2024-07-11 06:58:31.837827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58749 ] 00:05:48.023 [2024-07-11 06:58:31.969708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.023 [2024-07-11 06:58:32.053471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val= 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val= 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val=0x1 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val= 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val= 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val=compare 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val= 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val=software 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val=32 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val=32 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val=1 00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.282 06:58:32 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:48.282 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.282 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.283 06:58:32 -- accel/accel.sh@21 -- # val=Yes 00:05:48.283 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.283 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.283 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.283 06:58:32 -- accel/accel.sh@21 -- # val= 00:05:48.283 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.283 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.283 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:48.283 06:58:32 -- accel/accel.sh@21 -- # val= 00:05:48.283 06:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.283 06:58:32 -- accel/accel.sh@20 -- # IFS=: 00:05:48.283 06:58:32 -- accel/accel.sh@20 -- # read -r var val 00:05:49.662 06:58:33 -- accel/accel.sh@21 -- # val= 00:05:49.662 06:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # IFS=: 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # read -r var val 00:05:49.662 06:58:33 -- accel/accel.sh@21 -- # val= 00:05:49.662 06:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # IFS=: 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # read -r var val 00:05:49.662 06:58:33 -- accel/accel.sh@21 -- # val= 00:05:49.662 06:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # IFS=: 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # read -r var val 00:05:49.662 06:58:33 -- accel/accel.sh@21 -- # val= 00:05:49.662 06:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # IFS=: 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # read -r var val 00:05:49.662 06:58:33 -- accel/accel.sh@21 -- # val= 00:05:49.662 06:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # IFS=: 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # read -r var val 00:05:49.662 06:58:33 -- accel/accel.sh@21 -- # val= 00:05:49.662 06:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # IFS=: 00:05:49.662 06:58:33 -- accel/accel.sh@20 -- # read -r var val 00:05:49.662 06:58:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.662 06:58:33 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:49.662 06:58:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.662 00:05:49.662 real 0m2.970s 00:05:49.662 user 0m2.542s 00:05:49.662 sys 0m0.226s 00:05:49.662 06:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.662 ************************************ 00:05:49.662 END TEST accel_compare 00:05:49.662 ************************************ 00:05:49.662 06:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 06:58:33 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:49.662 06:58:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:49.662 06:58:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.662 06:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 ************************************ 00:05:49.662 START TEST accel_xor 00:05:49.662 ************************************ 00:05:49.662 06:58:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:49.662 06:58:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.662 06:58:33 -- accel/accel.sh@17 -- # local accel_module 00:05:49.662 
06:58:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:49.662 06:58:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:49.662 06:58:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.662 06:58:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.662 06:58:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.662 06:58:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.662 06:58:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.662 06:58:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.662 06:58:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.662 06:58:33 -- accel/accel.sh@42 -- # jq -r . 00:05:49.662 [2024-07-11 06:58:33.373007] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:49.662 [2024-07-11 06:58:33.373096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58789 ] 00:05:49.662 [2024-07-11 06:58:33.512691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.662 [2024-07-11 06:58:33.623800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.037 06:58:34 -- accel/accel.sh@18 -- # out=' 00:05:51.037 SPDK Configuration: 00:05:51.037 Core mask: 0x1 00:05:51.037 00:05:51.037 Accel Perf Configuration: 00:05:51.037 Workload Type: xor 00:05:51.037 Source buffers: 2 00:05:51.037 Transfer size: 4096 bytes 00:05:51.037 Vector count 1 00:05:51.037 Module: software 00:05:51.037 Queue depth: 32 00:05:51.037 Allocate depth: 32 00:05:51.037 # threads/core: 1 00:05:51.037 Run time: 1 seconds 00:05:51.037 Verify: Yes 00:05:51.037 00:05:51.037 Running for 1 seconds... 00:05:51.037 00:05:51.037 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.037 ------------------------------------------------------------------------------------ 00:05:51.037 0,0 269216/s 1051 MiB/s 0 0 00:05:51.037 ==================================================================================== 00:05:51.037 Total 269216/s 1051 MiB/s 0 0' 00:05:51.037 06:58:34 -- accel/accel.sh@20 -- # IFS=: 00:05:51.037 06:58:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:51.037 06:58:34 -- accel/accel.sh@20 -- # read -r var val 00:05:51.037 06:58:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:51.037 06:58:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.037 06:58:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.037 06:58:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.037 06:58:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.037 06:58:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.037 06:58:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.037 06:58:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.037 06:58:34 -- accel/accel.sh@42 -- # jq -r . 00:05:51.037 [2024-07-11 06:58:34.884362] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:51.037 [2024-07-11 06:58:34.884486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58803 ] 00:05:51.037 [2024-07-11 06:58:35.016475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.296 [2024-07-11 06:58:35.107132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val= 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val= 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=0x1 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val= 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val= 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=xor 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=2 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val= 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=software 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=32 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=32 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=1 00:05:51.296 06:58:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val=Yes 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val= 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:51.296 06:58:35 -- accel/accel.sh@21 -- # val= 00:05:51.296 06:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # IFS=: 00:05:51.296 06:58:35 -- accel/accel.sh@20 -- # read -r var val 00:05:52.668 06:58:36 -- accel/accel.sh@21 -- # val= 00:05:52.668 06:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.668 06:58:36 -- accel/accel.sh@20 -- # IFS=: 00:05:52.668 06:58:36 -- accel/accel.sh@20 -- # read -r var val 00:05:52.668 06:58:36 -- accel/accel.sh@21 -- # val= 00:05:52.668 06:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.668 06:58:36 -- accel/accel.sh@20 -- # IFS=: 00:05:52.668 06:58:36 -- accel/accel.sh@20 -- # read -r var val 00:05:52.668 06:58:36 -- accel/accel.sh@21 -- # val= 00:05:52.668 06:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.668 06:58:36 -- accel/accel.sh@20 -- # IFS=: 00:05:52.668 06:58:36 -- accel/accel.sh@20 -- # read -r var val 00:05:52.668 06:58:36 -- accel/accel.sh@21 -- # val= 00:05:52.668 06:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.669 06:58:36 -- accel/accel.sh@20 -- # IFS=: 00:05:52.669 06:58:36 -- accel/accel.sh@20 -- # read -r var val 00:05:52.669 06:58:36 -- accel/accel.sh@21 -- # val= 00:05:52.669 06:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.669 06:58:36 -- accel/accel.sh@20 -- # IFS=: 00:05:52.669 06:58:36 -- accel/accel.sh@20 -- # read -r var val 00:05:52.669 ************************************ 00:05:52.669 END TEST accel_xor 00:05:52.669 ************************************ 00:05:52.669 06:58:36 -- accel/accel.sh@21 -- # val= 00:05:52.669 06:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.669 06:58:36 -- accel/accel.sh@20 -- # IFS=: 00:05:52.669 06:58:36 -- accel/accel.sh@20 -- # read -r var val 00:05:52.669 06:58:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.669 06:58:36 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:52.669 06:58:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.669 00:05:52.669 real 0m3.002s 00:05:52.669 user 0m2.576s 00:05:52.669 sys 0m0.224s 00:05:52.669 06:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.669 06:58:36 -- common/autotest_common.sh@10 -- # set +x 00:05:52.669 06:58:36 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:52.669 06:58:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:52.669 06:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.669 06:58:36 -- common/autotest_common.sh@10 -- # set +x 00:05:52.669 ************************************ 00:05:52.669 START TEST accel_xor 00:05:52.669 ************************************ 00:05:52.669 
06:58:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:52.669 06:58:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.669 06:58:36 -- accel/accel.sh@17 -- # local accel_module 00:05:52.669 06:58:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:52.669 06:58:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:52.669 06:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.669 06:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.669 06:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.669 06:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.669 06:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.669 06:58:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.669 06:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.669 06:58:36 -- accel/accel.sh@42 -- # jq -r . 00:05:52.669 [2024-07-11 06:58:36.429878] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:52.669 [2024-07-11 06:58:36.429985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58842 ] 00:05:52.669 [2024-07-11 06:58:36.568287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.669 [2024-07-11 06:58:36.664942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.040 06:58:37 -- accel/accel.sh@18 -- # out=' 00:05:54.040 SPDK Configuration: 00:05:54.040 Core mask: 0x1 00:05:54.040 00:05:54.040 Accel Perf Configuration: 00:05:54.040 Workload Type: xor 00:05:54.040 Source buffers: 3 00:05:54.040 Transfer size: 4096 bytes 00:05:54.040 Vector count 1 00:05:54.040 Module: software 00:05:54.040 Queue depth: 32 00:05:54.040 Allocate depth: 32 00:05:54.040 # threads/core: 1 00:05:54.040 Run time: 1 seconds 00:05:54.040 Verify: Yes 00:05:54.040 00:05:54.040 Running for 1 seconds... 00:05:54.040 00:05:54.040 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.040 ------------------------------------------------------------------------------------ 00:05:54.040 0,0 244416/s 954 MiB/s 0 0 00:05:54.040 ==================================================================================== 00:05:54.040 Total 244416/s 954 MiB/s 0 0' 00:05:54.040 06:58:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:54.040 06:58:37 -- accel/accel.sh@20 -- # IFS=: 00:05:54.040 06:58:37 -- accel/accel.sh@20 -- # read -r var val 00:05:54.040 06:58:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:54.040 06:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.040 06:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.040 06:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.040 06:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.040 06:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.040 06:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.040 06:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.040 06:58:37 -- accel/accel.sh@42 -- # jq -r . 00:05:54.040 [2024-07-11 06:58:37.918787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
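Against the two-source xor run above, this three-source pass drops from 269216 to 244416 transfers/s once the software module has to read one more 4096-byte source per transfer, which is the expected direction for a memory-bound XOR. The MiB/s columns follow directly from rate times vector size, for example with plain shell arithmetic:

    # Output bandwidth implied by the reported transfer rates (4096 B produced per transfer).
    echo "xor, 2 sources: $(( 269216 * 4096 / 1048576 )) MiB/s"   # -> 1051 MiB/s
    echo "xor, 3 sources: $(( 244416 * 4096 / 1048576 )) MiB/s"   # -> 954 MiB/s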
00:05:54.040 [2024-07-11 06:58:37.918889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58857 ] 00:05:54.040 [2024-07-11 06:58:38.059703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.298 [2024-07-11 06:58:38.155826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val= 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val= 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=0x1 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val= 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val= 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=xor 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=3 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val= 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=software 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=32 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=32 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=1 00:05:54.298 06:58:38 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.298 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.298 06:58:38 -- accel/accel.sh@21 -- # val=Yes 00:05:54.298 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.299 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.299 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.299 06:58:38 -- accel/accel.sh@21 -- # val= 00:05:54.299 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.299 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.299 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:54.299 06:58:38 -- accel/accel.sh@21 -- # val= 00:05:54.299 06:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.299 06:58:38 -- accel/accel.sh@20 -- # IFS=: 00:05:54.299 06:58:38 -- accel/accel.sh@20 -- # read -r var val 00:05:55.672 06:58:39 -- accel/accel.sh@21 -- # val= 00:05:55.672 06:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # IFS=: 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # read -r var val 00:05:55.672 06:58:39 -- accel/accel.sh@21 -- # val= 00:05:55.672 06:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # IFS=: 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # read -r var val 00:05:55.672 06:58:39 -- accel/accel.sh@21 -- # val= 00:05:55.672 06:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # IFS=: 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # read -r var val 00:05:55.672 06:58:39 -- accel/accel.sh@21 -- # val= 00:05:55.672 06:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # IFS=: 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # read -r var val 00:05:55.672 06:58:39 -- accel/accel.sh@21 -- # val= 00:05:55.672 06:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # IFS=: 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # read -r var val 00:05:55.672 06:58:39 -- accel/accel.sh@21 -- # val= 00:05:55.672 ************************************ 00:05:55.672 END TEST accel_xor 00:05:55.672 ************************************ 00:05:55.672 06:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # IFS=: 00:05:55.672 06:58:39 -- accel/accel.sh@20 -- # read -r var val 00:05:55.672 06:58:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.672 06:58:39 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:55.672 06:58:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.672 00:05:55.672 real 0m2.993s 00:05:55.672 user 0m2.567s 00:05:55.672 sys 0m0.220s 00:05:55.672 06:58:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.672 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:05:55.672 06:58:39 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:55.672 06:58:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:55.672 06:58:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.672 06:58:39 -- common/autotest_common.sh@10 -- # set +x 00:05:55.672 ************************************ 00:05:55.672 START TEST accel_dif_verify 00:05:55.672 ************************************ 
00:05:55.672 06:58:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:55.672 06:58:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.672 06:58:39 -- accel/accel.sh@17 -- # local accel_module 00:05:55.672 06:58:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:55.672 06:58:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:55.672 06:58:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.672 06:58:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.672 06:58:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.672 06:58:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.672 06:58:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.672 06:58:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.672 06:58:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.672 06:58:39 -- accel/accel.sh@42 -- # jq -r . 00:05:55.672 [2024-07-11 06:58:39.470697] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:55.672 [2024-07-11 06:58:39.470789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58892 ] 00:05:55.672 [2024-07-11 06:58:39.604548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.672 [2024-07-11 06:58:39.702962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.045 06:58:40 -- accel/accel.sh@18 -- # out=' 00:05:57.045 SPDK Configuration: 00:05:57.045 Core mask: 0x1 00:05:57.045 00:05:57.045 Accel Perf Configuration: 00:05:57.045 Workload Type: dif_verify 00:05:57.045 Vector size: 4096 bytes 00:05:57.045 Transfer size: 4096 bytes 00:05:57.045 Block size: 512 bytes 00:05:57.045 Metadata size: 8 bytes 00:05:57.045 Vector count 1 00:05:57.045 Module: software 00:05:57.045 Queue depth: 32 00:05:57.045 Allocate depth: 32 00:05:57.045 # threads/core: 1 00:05:57.045 Run time: 1 seconds 00:05:57.045 Verify: No 00:05:57.045 00:05:57.045 Running for 1 seconds... 00:05:57.045 00:05:57.045 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.045 ------------------------------------------------------------------------------------ 00:05:57.045 0,0 98880/s 392 MiB/s 0 0 00:05:57.045 ==================================================================================== 00:05:57.046 Total 98880/s 386 MiB/s 0 0' 00:05:57.046 06:58:40 -- accel/accel.sh@20 -- # IFS=: 00:05:57.046 06:58:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:57.046 06:58:40 -- accel/accel.sh@20 -- # read -r var val 00:05:57.046 06:58:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:57.046 06:58:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.046 06:58:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.046 06:58:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.046 06:58:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.046 06:58:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.046 06:58:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.046 06:58:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.046 06:58:40 -- accel/accel.sh@42 -- # jq -r . 00:05:57.046 [2024-07-11 06:58:40.964168] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
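The dif_verify report above spells out the protected-block layout being checked: 4096-byte transfers split into 512-byte blocks, each carrying 8 bytes of protection metadata (in the usual T10 DIF layout those 8 bytes hold a 16-bit guard CRC, a 16-bit application tag and a 32-bit reference tag, though the report itself only states the sizes). Note that the command line drops -y for the DIF workloads, so Verify reads No and the integrity check is the workload itself rather than a post-run comparison. The per-transfer accounting implied by those sizes:

    # Block/metadata accounting from the sizes in the report (4096 / 512 / 8 bytes).
    xfer=4096; blk=512; md=8
    blocks=$(( xfer / blk ))
    echo "$blocks blocks per transfer, $(( blocks * md )) bytes of protection info"   # 8 blocks, 64 bytes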
00:05:57.046 [2024-07-11 06:58:40.964268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:05:57.046 [2024-07-11 06:58:41.098200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.305 [2024-07-11 06:58:41.199354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val= 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val= 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val=0x1 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val= 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val= 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val=dif_verify 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val= 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val=software 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 
-- # val=32 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val=32 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val=1 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val=No 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val= 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:57.305 06:58:41 -- accel/accel.sh@21 -- # val= 00:05:57.305 06:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # IFS=: 00:05:57.305 06:58:41 -- accel/accel.sh@20 -- # read -r var val 00:05:58.681 06:58:42 -- accel/accel.sh@21 -- # val= 00:05:58.681 06:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # IFS=: 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # read -r var val 00:05:58.681 06:58:42 -- accel/accel.sh@21 -- # val= 00:05:58.681 06:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # IFS=: 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # read -r var val 00:05:58.681 06:58:42 -- accel/accel.sh@21 -- # val= 00:05:58.681 06:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # IFS=: 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # read -r var val 00:05:58.681 06:58:42 -- accel/accel.sh@21 -- # val= 00:05:58.681 06:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # IFS=: 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # read -r var val 00:05:58.681 06:58:42 -- accel/accel.sh@21 -- # val= 00:05:58.681 06:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # IFS=: 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # read -r var val 00:05:58.681 06:58:42 -- accel/accel.sh@21 -- # val= 00:05:58.681 06:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # IFS=: 00:05:58.681 06:58:42 -- accel/accel.sh@20 -- # read -r var val 00:05:58.681 06:58:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.681 06:58:42 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:58.682 06:58:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.682 00:05:58.682 real 0m3.004s 00:05:58.682 user 0m2.582s 00:05:58.682 sys 0m0.222s 00:05:58.682 06:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.682 ************************************ 00:05:58.682 END TEST accel_dif_verify 00:05:58.682 ************************************ 00:05:58.682 
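With accel_dif_verify closed out here and accel_dif_generate starting just below, the per-test shape of this log stays constant: a START/END banner pair, one formatted accel_perf report per invocation, and a real/user/sys timing line. A hypothetical one-liner for skimming those pieces out of a saved copy of this console output (autotest.log is only a placeholder name):

    grep -E 'START TEST|END TEST|Total [0-9]+/s|real[[:space:]]+[0-9]+m' autotest.log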
06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:05:58.682 06:58:42 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:58.682 06:58:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:58.682 06:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.682 06:58:42 -- common/autotest_common.sh@10 -- # set +x 00:05:58.682 ************************************ 00:05:58.682 START TEST accel_dif_generate 00:05:58.682 ************************************ 00:05:58.682 06:58:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:58.682 06:58:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.682 06:58:42 -- accel/accel.sh@17 -- # local accel_module 00:05:58.682 06:58:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:58.682 06:58:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:58.682 06:58:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.682 06:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.682 06:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.682 06:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.682 06:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.682 06:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.682 06:58:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.682 06:58:42 -- accel/accel.sh@42 -- # jq -r . 00:05:58.682 [2024-07-11 06:58:42.524037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:58.682 [2024-07-11 06:58:42.524112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58946 ] 00:05:58.682 [2024-07-11 06:58:42.654158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.940 [2024-07-11 06:58:42.740936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.316 06:58:43 -- accel/accel.sh@18 -- # out=' 00:06:00.316 SPDK Configuration: 00:06:00.316 Core mask: 0x1 00:06:00.316 00:06:00.316 Accel Perf Configuration: 00:06:00.316 Workload Type: dif_generate 00:06:00.316 Vector size: 4096 bytes 00:06:00.316 Transfer size: 4096 bytes 00:06:00.316 Block size: 512 bytes 00:06:00.316 Metadata size: 8 bytes 00:06:00.316 Vector count 1 00:06:00.316 Module: software 00:06:00.316 Queue depth: 32 00:06:00.316 Allocate depth: 32 00:06:00.316 # threads/core: 1 00:06:00.316 Run time: 1 seconds 00:06:00.316 Verify: No 00:06:00.316 00:06:00.316 Running for 1 seconds... 
00:06:00.316 00:06:00.316 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.316 ------------------------------------------------------------------------------------ 00:06:00.316 0,0 121248/s 481 MiB/s 0 0 00:06:00.316 ==================================================================================== 00:06:00.316 Total 121248/s 473 MiB/s 0 0' 00:06:00.316 06:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:00.316 06:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:00.316 06:58:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:00.316 06:58:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:00.316 06:58:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.316 06:58:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.316 06:58:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.316 06:58:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.316 06:58:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.316 06:58:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.316 06:58:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.316 06:58:43 -- accel/accel.sh@42 -- # jq -r . 00:06:00.316 [2024-07-11 06:58:44.006963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:00.316 [2024-07-11 06:58:44.007042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:06:00.316 [2024-07-11 06:58:44.143937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.316 [2024-07-11 06:58:44.223980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val= 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val= 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val=0x1 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val= 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val= 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val=dif_generate 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 
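As a quick cross-check of the dif_generate summary above — a sketch, not harness output — the MiB/s column is just the transfer rate times the 4096-byte transfer size reported in the configuration dump, which lands on the 473 MiB/s of the Total row; the same rule applies to the other fixed-size tables in this run:

  echo $(( 121248 * 4096 / 1024 / 1024 ))   # prints 473 (MiB/s)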
00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val= 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val=software 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val=32 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val=32 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val=1 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val=No 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val= 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:00.317 06:58:44 -- accel/accel.sh@21 -- # val= 00:06:00.317 06:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:00.317 06:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:01.692 06:58:45 -- accel/accel.sh@21 -- # val= 00:06:01.692 06:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.692 06:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:01.692 06:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:01.692 06:58:45 -- accel/accel.sh@21 -- # val= 00:06:01.692 06:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.692 06:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:01.692 06:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:01.692 06:58:45 -- accel/accel.sh@21 -- # val= 00:06:01.692 06:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.692 06:58:45 -- 
accel/accel.sh@20 -- # IFS=: 00:06:01.693 06:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:01.693 06:58:45 -- accel/accel.sh@21 -- # val= 00:06:01.693 06:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.693 06:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:01.693 06:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:01.693 06:58:45 -- accel/accel.sh@21 -- # val= 00:06:01.693 06:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.693 06:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:01.693 06:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:01.693 06:58:45 -- accel/accel.sh@21 -- # val= 00:06:01.693 06:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.693 06:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:01.693 06:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:01.693 06:58:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.693 06:58:45 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:01.693 06:58:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.693 00:06:01.693 real 0m2.976s 00:06:01.693 user 0m2.552s 00:06:01.693 sys 0m0.223s 00:06:01.693 06:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.693 06:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 ************************************ 00:06:01.693 END TEST accel_dif_generate 00:06:01.693 ************************************ 00:06:01.693 06:58:45 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:01.693 06:58:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:01.693 06:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.693 06:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 ************************************ 00:06:01.693 START TEST accel_dif_generate_copy 00:06:01.693 ************************************ 00:06:01.693 06:58:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:01.693 06:58:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.693 06:58:45 -- accel/accel.sh@17 -- # local accel_module 00:06:01.693 06:58:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:01.693 06:58:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:01.693 06:58:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.693 06:58:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.693 06:58:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.693 06:58:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.693 06:58:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.693 06:58:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.693 06:58:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.693 06:58:45 -- accel/accel.sh@42 -- # jq -r . 00:06:01.693 [2024-07-11 06:58:45.552017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:01.693 [2024-07-11 06:58:45.552097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59000 ] 00:06:01.693 [2024-07-11 06:58:45.683331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.951 [2024-07-11 06:58:45.764390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.327 06:58:47 -- accel/accel.sh@18 -- # out=' 00:06:03.327 SPDK Configuration: 00:06:03.327 Core mask: 0x1 00:06:03.327 00:06:03.327 Accel Perf Configuration: 00:06:03.327 Workload Type: dif_generate_copy 00:06:03.327 Vector size: 4096 bytes 00:06:03.327 Transfer size: 4096 bytes 00:06:03.327 Vector count 1 00:06:03.327 Module: software 00:06:03.327 Queue depth: 32 00:06:03.327 Allocate depth: 32 00:06:03.327 # threads/core: 1 00:06:03.327 Run time: 1 seconds 00:06:03.327 Verify: No 00:06:03.327 00:06:03.327 Running for 1 seconds... 00:06:03.327 00:06:03.327 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.327 ------------------------------------------------------------------------------------ 00:06:03.327 0,0 93952/s 372 MiB/s 0 0 00:06:03.327 ==================================================================================== 00:06:03.327 Total 93952/s 367 MiB/s 0 0' 00:06:03.327 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.327 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.327 06:58:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:03.327 06:58:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:03.327 06:58:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.327 06:58:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.327 06:58:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.327 06:58:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.327 06:58:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.327 06:58:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.327 06:58:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.328 06:58:47 -- accel/accel.sh@42 -- # jq -r . 00:06:03.328 [2024-07-11 06:58:47.034856] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:03.328 [2024-07-11 06:58:47.034949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59019 ] 00:06:03.328 [2024-07-11 06:58:47.171349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.328 [2024-07-11 06:58:47.248671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val= 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val= 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val=0x1 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val= 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val= 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val= 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val=software 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val=32 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val=32 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 
-- # val=1 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val=No 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val= 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:03.328 06:58:47 -- accel/accel.sh@21 -- # val= 00:06:03.328 06:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:03.328 06:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:04.703 06:58:48 -- accel/accel.sh@21 -- # val= 00:06:04.703 06:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:04.703 06:58:48 -- accel/accel.sh@21 -- # val= 00:06:04.703 06:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:04.703 06:58:48 -- accel/accel.sh@21 -- # val= 00:06:04.703 06:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:04.703 06:58:48 -- accel/accel.sh@21 -- # val= 00:06:04.703 06:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:04.703 06:58:48 -- accel/accel.sh@21 -- # val= 00:06:04.703 06:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:04.703 06:58:48 -- accel/accel.sh@21 -- # val= 00:06:04.703 06:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:04.703 06:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:04.703 06:58:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.703 06:58:48 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:04.703 06:58:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.703 00:06:04.703 real 0m2.966s 00:06:04.703 user 0m2.539s 00:06:04.703 sys 0m0.227s 00:06:04.703 06:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.703 06:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.703 ************************************ 00:06:04.703 END TEST accel_dif_generate_copy 00:06:04.703 ************************************ 00:06:04.703 06:58:48 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:04.703 06:58:48 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.703 06:58:48 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:04.703 06:58:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.703 06:58:48 -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.703 ************************************ 00:06:04.703 START TEST accel_comp 00:06:04.703 ************************************ 00:06:04.703 06:58:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.703 06:58:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.703 06:58:48 -- accel/accel.sh@17 -- # local accel_module 00:06:04.703 06:58:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.703 06:58:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.703 06:58:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.703 06:58:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.703 06:58:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.703 06:58:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.703 06:58:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.703 06:58:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.703 06:58:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.703 06:58:48 -- accel/accel.sh@42 -- # jq -r . 00:06:04.703 [2024-07-11 06:58:48.569147] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:04.703 [2024-07-11 06:58:48.569291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59054 ] 00:06:04.703 [2024-07-11 06:58:48.700797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.961 [2024-07-11 06:58:48.781328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.338 06:58:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:06.338 00:06:06.338 SPDK Configuration: 00:06:06.338 Core mask: 0x1 00:06:06.338 00:06:06.338 Accel Perf Configuration: 00:06:06.338 Workload Type: compress 00:06:06.338 Transfer size: 4096 bytes 00:06:06.338 Vector count 1 00:06:06.338 Module: software 00:06:06.338 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.338 Queue depth: 32 00:06:06.338 Allocate depth: 32 00:06:06.338 # threads/core: 1 00:06:06.338 Run time: 1 seconds 00:06:06.338 Verify: No 00:06:06.338 00:06:06.338 Running for 1 seconds... 
00:06:06.338 00:06:06.338 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.338 ------------------------------------------------------------------------------------ 00:06:06.338 0,0 48896/s 203 MiB/s 0 0 00:06:06.338 ==================================================================================== 00:06:06.338 Total 48896/s 191 MiB/s 0 0' 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.338 06:58:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.338 06:58:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.338 06:58:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.338 06:58:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.338 06:58:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.338 06:58:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.338 06:58:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.338 06:58:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.338 06:58:50 -- accel/accel.sh@42 -- # jq -r . 00:06:06.338 [2024-07-11 06:58:50.052661] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:06.338 [2024-07-11 06:58:50.052743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:06:06.338 [2024-07-11 06:58:50.191515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.338 [2024-07-11 06:58:50.269176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=0x1 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=compress 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 
00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=software 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=32 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=32 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=1 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val=No 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:06.338 06:58:50 -- accel/accel.sh@21 -- # val= 00:06:06.338 06:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:06.338 06:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:07.714 06:58:51 -- accel/accel.sh@21 -- # val= 00:06:07.714 06:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:07.714 06:58:51 -- accel/accel.sh@21 -- # val= 00:06:07.714 06:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:07.714 06:58:51 -- accel/accel.sh@21 -- # val= 00:06:07.714 06:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:07.714 06:58:51 -- accel/accel.sh@21 -- # val= 
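The compress and decompress runs differ from the dif_* ones mainly in that accel.sh also passes an input file with -l (shown as 'File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib' in the configuration dump above). A rough hand-run equivalent of the compress case, under the same assumptions as the earlier sketch:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib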
00:06:07.714 06:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:07.714 06:58:51 -- accel/accel.sh@21 -- # val= 00:06:07.714 06:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:07.714 06:58:51 -- accel/accel.sh@21 -- # val= 00:06:07.714 06:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:07.714 06:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:07.714 06:58:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.714 06:58:51 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:07.714 06:58:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.714 00:06:07.714 real 0m2.956s 00:06:07.714 user 0m1.274s 00:06:07.714 sys 0m0.111s 00:06:07.714 06:58:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.714 ************************************ 00:06:07.714 END TEST accel_comp 00:06:07.714 ************************************ 00:06:07.714 06:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 06:58:51 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.714 06:58:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:07.714 06:58:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.714 06:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 ************************************ 00:06:07.714 START TEST accel_decomp 00:06:07.714 ************************************ 00:06:07.714 06:58:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.714 06:58:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.714 06:58:51 -- accel/accel.sh@17 -- # local accel_module 00:06:07.714 06:58:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.714 06:58:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.714 06:58:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.714 06:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.714 06:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.714 06:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.714 06:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.714 06:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.714 06:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.714 06:58:51 -- accel/accel.sh@42 -- # jq -r . 00:06:07.714 [2024-07-11 06:58:51.571693] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:07.714 [2024-07-11 06:58:51.571791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59104 ] 00:06:07.714 [2024-07-11 06:58:51.707729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.973 [2024-07-11 06:58:51.784878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.348 06:58:53 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:09.348 00:06:09.348 SPDK Configuration: 00:06:09.348 Core mask: 0x1 00:06:09.348 00:06:09.348 Accel Perf Configuration: 00:06:09.348 Workload Type: decompress 00:06:09.348 Transfer size: 4096 bytes 00:06:09.348 Vector count 1 00:06:09.348 Module: software 00:06:09.348 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.348 Queue depth: 32 00:06:09.348 Allocate depth: 32 00:06:09.348 # threads/core: 1 00:06:09.348 Run time: 1 seconds 00:06:09.348 Verify: Yes 00:06:09.348 00:06:09.348 Running for 1 seconds... 00:06:09.348 00:06:09.348 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.348 ------------------------------------------------------------------------------------ 00:06:09.348 0,0 71264/s 131 MiB/s 0 0 00:06:09.348 ==================================================================================== 00:06:09.348 Total 71264/s 278 MiB/s 0 0' 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.348 06:58:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.348 06:58:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.348 06:58:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.348 06:58:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.348 06:58:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.348 06:58:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.348 06:58:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.348 06:58:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.348 06:58:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.348 06:58:53 -- accel/accel.sh@42 -- # jq -r . 00:06:09.348 [2024-07-11 06:58:53.043339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:09.348 [2024-07-11 06:58:53.043426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59124 ] 00:06:09.348 [2024-07-11 06:58:53.181304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.348 [2024-07-11 06:58:53.259439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.348 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.348 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.348 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.348 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.348 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.348 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.348 06:58:53 -- accel/accel.sh@21 -- # val=0x1 00:06:09.348 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.348 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.348 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val=decompress 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val=software 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val=32 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- 
accel/accel.sh@21 -- # val=32 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val=1 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val=Yes 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:09.349 06:58:53 -- accel/accel.sh@21 -- # val= 00:06:09.349 06:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:09.349 06:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:10.726 06:58:54 -- accel/accel.sh@21 -- # val= 00:06:10.726 06:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.726 06:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:10.726 06:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:10.726 06:58:54 -- accel/accel.sh@21 -- # val= 00:06:10.726 06:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.726 06:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:10.726 06:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:10.726 06:58:54 -- accel/accel.sh@21 -- # val= 00:06:10.726 06:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.726 06:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:10.726 06:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:10.726 06:58:54 -- accel/accel.sh@21 -- # val= 00:06:10.726 06:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.727 06:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:10.727 06:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:10.727 06:58:54 -- accel/accel.sh@21 -- # val= 00:06:10.727 06:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.727 06:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:10.727 06:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:10.727 06:58:54 -- accel/accel.sh@21 -- # val= 00:06:10.727 06:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.727 06:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:10.727 06:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:10.727 06:58:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.727 06:58:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:10.727 06:58:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.727 00:06:10.727 real 0m2.953s 00:06:10.727 user 0m2.515s 00:06:10.727 sys 0m0.234s 00:06:10.727 06:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.727 06:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:10.727 ************************************ 00:06:10.727 END TEST accel_decomp 00:06:10.727 ************************************ 00:06:10.727 06:58:54 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
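The decompress variants add -y, which shows up as 'Verify: Yes' in their configuration dumps, and the accel_decmop_full case started just above also passes -o 0, after which the configuration reports a 111250-byte transfer size. A rough manual equivalent of that command, again only a sketch under the same assumptions as before:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0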
00:06:10.727 06:58:54 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:10.727 06:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.727 06:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:10.727 ************************************ 00:06:10.727 START TEST accel_decmop_full 00:06:10.727 ************************************ 00:06:10.727 06:58:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:10.727 06:58:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.727 06:58:54 -- accel/accel.sh@17 -- # local accel_module 00:06:10.727 06:58:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:10.727 06:58:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:10.727 06:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.727 06:58:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.727 06:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.727 06:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.727 06:58:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.727 06:58:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.727 06:58:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.727 06:58:54 -- accel/accel.sh@42 -- # jq -r . 00:06:10.727 [2024-07-11 06:58:54.587855] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:10.727 [2024-07-11 06:58:54.587942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59158 ] 00:06:10.727 [2024-07-11 06:58:54.721501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.986 [2024-07-11 06:58:54.799128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.363 06:58:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:12.363 00:06:12.363 SPDK Configuration: 00:06:12.363 Core mask: 0x1 00:06:12.363 00:06:12.363 Accel Perf Configuration: 00:06:12.363 Workload Type: decompress 00:06:12.363 Transfer size: 111250 bytes 00:06:12.363 Vector count 1 00:06:12.363 Module: software 00:06:12.363 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.363 Queue depth: 32 00:06:12.363 Allocate depth: 32 00:06:12.363 # threads/core: 1 00:06:12.363 Run time: 1 seconds 00:06:12.363 Verify: Yes 00:06:12.363 00:06:12.363 Running for 1 seconds... 
00:06:12.363 00:06:12.363 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.363 ------------------------------------------------------------------------------------ 00:06:12.363 0,0 4800/s 198 MiB/s 0 0 00:06:12.363 ==================================================================================== 00:06:12.363 Total 4800/s 509 MiB/s 0 0' 00:06:12.363 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.363 06:58:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:12.363 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.363 06:58:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:12.363 06:58:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.363 06:58:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.363 06:58:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.363 06:58:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.363 06:58:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.363 06:58:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.364 06:58:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.364 06:58:56 -- accel/accel.sh@42 -- # jq -r . 00:06:12.364 [2024-07-11 06:58:56.071566] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:12.364 [2024-07-11 06:58:56.072250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59178 ] 00:06:12.364 [2024-07-11 06:58:56.210124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.364 [2024-07-11 06:58:56.284002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=0x1 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=decompress 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:12.364 06:58:56 -- accel/accel.sh@20 
-- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=software 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=32 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=32 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=1 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val=Yes 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:12.364 06:58:56 -- accel/accel.sh@21 -- # val= 00:06:12.364 06:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:12.364 06:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:13.741 06:58:57 -- accel/accel.sh@21 -- # val= 00:06:13.741 06:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:13.741 06:58:57 -- accel/accel.sh@21 -- # val= 00:06:13.741 06:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:13.741 06:58:57 -- accel/accel.sh@21 -- # val= 00:06:13.741 06:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:13.741 06:58:57 -- accel/accel.sh@21 -- # 
val= 00:06:13.741 06:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:13.741 06:58:57 -- accel/accel.sh@21 -- # val= 00:06:13.741 06:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:13.741 06:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:13.741 06:58:57 -- accel/accel.sh@21 -- # val= 00:06:13.742 06:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.742 06:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:13.742 06:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:13.742 06:58:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.742 06:58:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:13.742 ************************************ 00:06:13.742 END TEST accel_decmop_full 00:06:13.742 ************************************ 00:06:13.742 06:58:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.742 00:06:13.742 real 0m2.983s 00:06:13.742 user 0m2.554s 00:06:13.742 sys 0m0.219s 00:06:13.742 06:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.742 06:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:13.742 06:58:57 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:13.742 06:58:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:13.742 06:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.742 06:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:13.742 ************************************ 00:06:13.742 START TEST accel_decomp_mcore 00:06:13.742 ************************************ 00:06:13.742 06:58:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:13.742 06:58:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.742 06:58:57 -- accel/accel.sh@17 -- # local accel_module 00:06:13.742 06:58:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:13.742 06:58:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:13.742 06:58:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.742 06:58:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.742 06:58:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.742 06:58:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.742 06:58:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.742 06:58:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.742 06:58:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.742 06:58:57 -- accel/accel.sh@42 -- # jq -r . 00:06:13.742 [2024-07-11 06:58:57.618265] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:13.742 [2024-07-11 06:58:57.618344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:06:13.742 [2024-07-11 06:58:57.752870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.001 [2024-07-11 06:58:57.833286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.001 [2024-07-11 06:58:57.833421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.001 [2024-07-11 06:58:57.833562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.001 [2024-07-11 06:58:57.834826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.377 06:58:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:15.377 00:06:15.377 SPDK Configuration: 00:06:15.377 Core mask: 0xf 00:06:15.377 00:06:15.378 Accel Perf Configuration: 00:06:15.378 Workload Type: decompress 00:06:15.378 Transfer size: 4096 bytes 00:06:15.378 Vector count 1 00:06:15.378 Module: software 00:06:15.378 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.378 Queue depth: 32 00:06:15.378 Allocate depth: 32 00:06:15.378 # threads/core: 1 00:06:15.378 Run time: 1 seconds 00:06:15.378 Verify: Yes 00:06:15.378 00:06:15.378 Running for 1 seconds... 00:06:15.378 00:06:15.378 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.378 ------------------------------------------------------------------------------------ 00:06:15.378 0,0 57408/s 105 MiB/s 0 0 00:06:15.378 3,0 54112/s 99 MiB/s 0 0 00:06:15.378 2,0 53440/s 98 MiB/s 0 0 00:06:15.378 1,0 52480/s 96 MiB/s 0 0 00:06:15.378 ==================================================================================== 00:06:15.378 Total 217440/s 849 MiB/s 0 0' 00:06:15.378 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.378 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.378 06:58:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:15.378 06:58:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:15.378 06:58:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.378 06:58:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.378 06:58:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.378 06:58:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.378 06:58:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.378 06:58:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.378 06:58:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.378 06:58:59 -- accel/accel.sh@42 -- # jq -r . 00:06:15.378 [2024-07-11 06:58:59.153642] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
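A quick sanity check of the Total row in the results table above, assuming MiB means 2^20 bytes: 217440 transfers/s at the reported 4096-byte transfer size works out to the printed 849 MiB/s. The per-core MiB/s figures do not sum to the Total figure, so only the Total row is checked here:

    # Total row: 217440 transfers/s x 4096 bytes per transfer
    awk 'BEGIN { printf "%.0f MiB/s\n", 217440 * 4096 / (1024 * 1024) }'   # prints 849 MiB/s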
00:06:15.378 [2024-07-11 06:58:59.153745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:06:15.378 [2024-07-11 06:58:59.291140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.378 [2024-07-11 06:58:59.371133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.378 [2024-07-11 06:58:59.371280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.378 [2024-07-11 06:58:59.371412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.378 [2024-07-11 06:58:59.371756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=0xf 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=decompress 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=software 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 
00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=32 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=32 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=1 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val=Yes 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:15.637 06:58:59 -- accel/accel.sh@21 -- # val= 00:06:15.637 06:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:15.637 06:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- 
accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@21 -- # val= 00:06:17.021 06:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:17.021 06:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:17.021 06:59:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.021 ************************************ 00:06:17.021 END TEST accel_decomp_mcore 00:06:17.021 ************************************ 00:06:17.021 06:59:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:17.021 06:59:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.021 00:06:17.021 real 0m3.106s 00:06:17.021 user 0m9.678s 00:06:17.021 sys 0m0.270s 00:06:17.021 06:59:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.021 06:59:00 -- common/autotest_common.sh@10 -- # set +x 00:06:17.021 06:59:00 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.021 06:59:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:17.021 06:59:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.021 06:59:00 -- common/autotest_common.sh@10 -- # set +x 00:06:17.021 ************************************ 00:06:17.021 START TEST accel_decomp_full_mcore 00:06:17.021 ************************************ 00:06:17.021 06:59:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.021 06:59:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.021 06:59:00 -- accel/accel.sh@17 -- # local accel_module 00:06:17.021 06:59:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.021 06:59:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.021 06:59:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.021 06:59:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.021 06:59:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.021 06:59:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.021 06:59:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.021 06:59:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.021 06:59:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.021 06:59:00 -- accel/accel.sh@42 -- # jq -r . 00:06:17.021 [2024-07-11 06:59:00.773310] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:17.021 [2024-07-11 06:59:00.773389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:06:17.021 [2024-07-11 06:59:00.903846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.021 [2024-07-11 06:59:00.999544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.021 [2024-07-11 06:59:00.999605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.021 [2024-07-11 06:59:00.999755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.021 [2024-07-11 06:59:00.999764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.485 06:59:02 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:18.485 00:06:18.485 SPDK Configuration: 00:06:18.485 Core mask: 0xf 00:06:18.485 00:06:18.485 Accel Perf Configuration: 00:06:18.485 Workload Type: decompress 00:06:18.485 Transfer size: 111250 bytes 00:06:18.485 Vector count 1 00:06:18.485 Module: software 00:06:18.485 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.485 Queue depth: 32 00:06:18.485 Allocate depth: 32 00:06:18.485 # threads/core: 1 00:06:18.485 Run time: 1 seconds 00:06:18.485 Verify: Yes 00:06:18.485 00:06:18.485 Running for 1 seconds... 00:06:18.485 00:06:18.485 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.485 ------------------------------------------------------------------------------------ 00:06:18.485 0,0 5472/s 226 MiB/s 0 0 00:06:18.485 3,0 4864/s 200 MiB/s 0 0 00:06:18.485 2,0 4640/s 191 MiB/s 0 0 00:06:18.485 1,0 4640/s 191 MiB/s 0 0 00:06:18.485 ==================================================================================== 00:06:18.485 Total 19616/s 2081 MiB/s 0 0' 00:06:18.485 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.485 06:59:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:18.485 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.485 06:59:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:18.485 06:59:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.485 06:59:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.485 06:59:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.485 06:59:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.485 06:59:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.485 06:59:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.486 06:59:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.486 06:59:02 -- accel/accel.sh@42 -- # jq -r . 00:06:18.486 [2024-07-11 06:59:02.366120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
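Compared with the 4096-byte run earlier, this full variant reports the whole 111250-byte payload per transfer, so per-core transfer counts drop by roughly an order of magnitude while aggregate throughput more than doubles. The Total row again matches transfers/s times transfer size, assuming MiB = 2^20 bytes:

    # Total row: 19616 transfers/s x 111250 bytes per transfer
    awk 'BEGIN { printf "%.0f MiB/s\n", 19616 * 111250 / (1024 * 1024) }'   # prints 2081 MiB/s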
00:06:18.486 [2024-07-11 06:59:02.366223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:06:18.486 [2024-07-11 06:59:02.503297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.745 [2024-07-11 06:59:02.593286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.745 [2024-07-11 06:59:02.593432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.745 [2024-07-11 06:59:02.593545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.745 [2024-07-11 06:59:02.593830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=0xf 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=decompress 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=software 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 
00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=32 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=32 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=1 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val=Yes 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:18.745 06:59:02 -- accel/accel.sh@21 -- # val= 00:06:18.745 06:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:18.745 06:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- 
accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@21 -- # val= 00:06:20.122 06:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:20.122 06:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:20.122 06:59:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.122 06:59:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:20.122 06:59:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.122 00:06:20.122 real 0m3.200s 00:06:20.122 user 0m9.911s 00:06:20.122 sys 0m0.318s 00:06:20.122 06:59:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.122 ************************************ 00:06:20.122 END TEST accel_decomp_full_mcore 00:06:20.122 06:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:20.122 ************************************ 00:06:20.122 06:59:03 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.122 06:59:03 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:20.122 06:59:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.122 06:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:20.122 ************************************ 00:06:20.122 START TEST accel_decomp_mthread 00:06:20.122 ************************************ 00:06:20.122 06:59:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.122 06:59:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.122 06:59:04 -- accel/accel.sh@17 -- # local accel_module 00:06:20.122 06:59:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.122 06:59:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.122 06:59:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.122 06:59:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.122 06:59:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.122 06:59:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.122 06:59:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.122 06:59:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.122 06:59:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.122 06:59:04 -- accel/accel.sh@42 -- # jq -r . 00:06:20.122 [2024-07-11 06:59:04.032715] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:20.122 [2024-07-11 06:59:04.032829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:06:20.122 [2024-07-11 06:59:04.161977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.381 [2024-07-11 06:59:04.254183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.755 06:59:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:21.755 00:06:21.755 SPDK Configuration: 00:06:21.755 Core mask: 0x1 00:06:21.755 00:06:21.755 Accel Perf Configuration: 00:06:21.755 Workload Type: decompress 00:06:21.755 Transfer size: 4096 bytes 00:06:21.755 Vector count 1 00:06:21.755 Module: software 00:06:21.755 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:21.755 Queue depth: 32 00:06:21.755 Allocate depth: 32 00:06:21.755 # threads/core: 2 00:06:21.755 Run time: 1 seconds 00:06:21.755 Verify: Yes 00:06:21.755 00:06:21.755 Running for 1 seconds... 00:06:21.755 00:06:21.755 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.755 ------------------------------------------------------------------------------------ 00:06:21.755 0,1 41600/s 76 MiB/s 0 0 00:06:21.755 0,0 41440/s 76 MiB/s 0 0 00:06:21.755 ==================================================================================== 00:06:21.755 Total 83040/s 324 MiB/s 0 0' 00:06:21.755 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:21.755 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:21.755 06:59:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:21.755 06:59:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:21.755 06:59:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.755 06:59:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.755 06:59:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.755 06:59:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.755 06:59:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.755 06:59:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.755 06:59:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.755 06:59:05 -- accel/accel.sh@42 -- # jq -r . 00:06:21.755 [2024-07-11 06:59:05.604798] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
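The same accel_perf binary drives every case in this section. The annotation below pairs each flag with the line it produces in the "Accel Perf Configuration" block, so the comments are read off the log rather than taken from authoritative documentation; no single run combines all of the flags (the mcore cases use -m, the mthread cases use -T, the full cases add -o 0):

    # Flags as they appear across the runs above, annotated from the configuration
    # block each run prints:
    #   -c /dev/fd/62                  JSON accel config piped in by the test wrapper
    #   -t 1                           "Run time: 1 seconds"
    #   -w decompress                  "Workload Type: decompress"
    #   -l .../spdk/test/accel/bib     "File Name: .../test/accel/bib"
    #   -y                             "Verify: Yes"
    #   -m 0xf                         "Core mask: 0xf" (the *_mcore cases)
    #   -T 2                           "# threads/core: 2" (the *_mthread cases)
    #   -o 0                           used by the full variants, which report "Transfer size: 111250 bytes"
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf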
00:06:21.755 [2024-07-11 06:59:05.604908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:06:21.755 [2024-07-11 06:59:05.740008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.014 [2024-07-11 06:59:05.830819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val=0x1 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val=decompress 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val=software 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val=32 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- 
accel/accel.sh@21 -- # val=32 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val=2 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val=Yes 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:22.014 06:59:05 -- accel/accel.sh@21 -- # val= 00:06:22.014 06:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:22.014 06:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@21 -- # val= 00:06:23.391 06:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@21 -- # val= 00:06:23.391 06:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@21 -- # val= 00:06:23.391 06:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@21 -- # val= 00:06:23.391 06:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@21 -- # val= 00:06:23.391 06:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@21 -- # val= 00:06:23.391 06:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@21 -- # val= 00:06:23.391 06:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 06:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 06:59:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.391 06:59:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:23.391 06:59:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.391 00:06:23.391 real 0m3.146s 00:06:23.391 user 0m2.651s 00:06:23.391 sys 0m0.291s 00:06:23.391 06:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.391 ************************************ 00:06:23.391 END TEST accel_decomp_mthread 00:06:23.391 
************************************ 00:06:23.391 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.391 06:59:07 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.391 06:59:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:23.391 06:59:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.391 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:23.391 ************************************ 00:06:23.391 START TEST accel_deomp_full_mthread 00:06:23.391 ************************************ 00:06:23.391 06:59:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.391 06:59:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.391 06:59:07 -- accel/accel.sh@17 -- # local accel_module 00:06:23.391 06:59:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.391 06:59:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:23.391 06:59:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.391 06:59:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.391 06:59:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.391 06:59:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.391 06:59:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.391 06:59:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.391 06:59:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.391 06:59:07 -- accel/accel.sh@42 -- # jq -r . 00:06:23.391 [2024-07-11 06:59:07.234377] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:23.391 [2024-07-11 06:59:07.234479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59394 ] 00:06:23.391 [2024-07-11 06:59:07.364700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.391 [2024-07-11 06:59:07.444847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.760 06:59:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:24.760 00:06:24.760 SPDK Configuration: 00:06:24.760 Core mask: 0x1 00:06:24.760 00:06:24.760 Accel Perf Configuration: 00:06:24.760 Workload Type: decompress 00:06:24.760 Transfer size: 111250 bytes 00:06:24.760 Vector count 1 00:06:24.760 Module: software 00:06:24.760 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:24.760 Queue depth: 32 00:06:24.760 Allocate depth: 32 00:06:24.760 # threads/core: 2 00:06:24.760 Run time: 1 seconds 00:06:24.760 Verify: Yes 00:06:24.760 00:06:24.760 Running for 1 seconds... 
00:06:24.760 00:06:24.760 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.760 ------------------------------------------------------------------------------------ 00:06:24.760 0,1 2368/s 97 MiB/s 0 0 00:06:24.760 0,0 2336/s 96 MiB/s 0 0 00:06:24.760 ==================================================================================== 00:06:24.760 Total 4704/s 499 MiB/s 0 0' 00:06:24.760 06:59:08 -- accel/accel.sh@20 -- # IFS=: 00:06:24.760 06:59:08 -- accel/accel.sh@20 -- # read -r var val 00:06:24.760 06:59:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.760 06:59:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.760 06:59:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.760 06:59:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.760 06:59:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.760 06:59:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.760 06:59:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.760 06:59:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.760 06:59:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.760 06:59:08 -- accel/accel.sh@42 -- # jq -r . 00:06:24.760 [2024-07-11 06:59:08.804619] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:24.761 [2024-07-11 06:59:08.804691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59408 ] 00:06:25.019 [2024-07-11 06:59:08.935627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.019 [2024-07-11 06:59:09.012001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=0x1 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=decompress 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=software 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=32 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=32 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=2 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val=Yes 00:06:25.277 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.277 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.277 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.278 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.278 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.278 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:25.278 06:59:09 -- accel/accel.sh@21 -- # val= 00:06:25.278 06:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.278 06:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:25.278 06:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@21 -- # val= 00:06:26.652 06:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # IFS=: 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@21 -- # val= 00:06:26.652 06:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # IFS=: 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@21 -- # val= 00:06:26.652 06:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # IFS=: 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # 
read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@21 -- # val= 00:06:26.652 06:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # IFS=: 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@21 -- # val= 00:06:26.652 06:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # IFS=: 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@21 -- # val= 00:06:26.652 06:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # IFS=: 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@21 -- # val= 00:06:26.652 06:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # IFS=: 00:06:26.652 06:59:10 -- accel/accel.sh@20 -- # read -r var val 00:06:26.652 06:59:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.652 06:59:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:26.653 06:59:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.653 00:06:26.653 real 0m3.126s 00:06:26.653 user 0m2.663s 00:06:26.653 sys 0m0.258s 00:06:26.653 ************************************ 00:06:26.653 END TEST accel_deomp_full_mthread 00:06:26.653 ************************************ 00:06:26.653 06:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.653 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:06:26.653 06:59:10 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:26.653 06:59:10 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:26.653 06:59:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:26.653 06:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.653 06:59:10 -- common/autotest_common.sh@10 -- # set +x 00:06:26.653 06:59:10 -- accel/accel.sh@129 -- # build_accel_config 00:06:26.653 06:59:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.653 06:59:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.653 06:59:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.653 06:59:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.653 06:59:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.653 06:59:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.653 06:59:10 -- accel/accel.sh@42 -- # jq -r . 00:06:26.653 ************************************ 00:06:26.653 START TEST accel_dif_functional_tests 00:06:26.653 ************************************ 00:06:26.653 06:59:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:26.653 [2024-07-11 06:59:10.428956] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:26.653 [2024-07-11 06:59:10.429029] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59449 ] 00:06:26.653 [2024-07-11 06:59:10.559608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.653 [2024-07-11 06:59:10.641603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.653 [2024-07-11 06:59:10.641739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.653 [2024-07-11 06:59:10.641743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.910 00:06:26.910 00:06:26.910 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.910 http://cunit.sourceforge.net/ 00:06:26.910 00:06:26.910 00:06:26.910 Suite: accel_dif 00:06:26.910 Test: verify: DIF generated, GUARD check ...passed 00:06:26.910 Test: verify: DIF generated, APPTAG check ...passed 00:06:26.910 Test: verify: DIF generated, REFTAG check ...passed 00:06:26.910 Test: verify: DIF not generated, GUARD check ...passed 00:06:26.910 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 06:59:10.761203] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:26.910 [2024-07-11 06:59:10.761290] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:26.910 [2024-07-11 06:59:10.761343] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:26.910 passed 00:06:26.910 Test: verify: DIF not generated, REFTAG check ...passed 00:06:26.910 Test: verify: APPTAG correct, APPTAG check ...[2024-07-11 06:59:10.761367] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:26.910 [2024-07-11 06:59:10.761391] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:26.910 [2024-07-11 06:59:10.761412] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:26.910 passed 00:06:26.910 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:26.910 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:26.911 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-11 06:59:10.761575] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:26.911 passed 00:06:26.911 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:26.911 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:26.911 Test: generate copy: DIF generated, GUARD check ...[2024-07-11 06:59:10.761902] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:26.911 passed 00:06:26.911 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:26.911 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:26.911 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:26.911 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:26.911 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:26.911 Test: generate copy: iovecs-len validate ...passed 00:06:26.911 Test: generate copy: buffer alignment validate ...[2024-07-11 06:59:10.762540] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:26.911 passed 00:06:26.911 00:06:26.911 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.911 suites 1 1 n/a 0 0 00:06:26.911 tests 20 20 20 0 0 00:06:26.911 asserts 204 204 204 0 n/a 00:06:26.911 00:06:26.911 Elapsed time = 0.005 seconds 00:06:27.169 00:06:27.169 real 0m0.676s 00:06:27.169 user 0m0.974s 00:06:27.169 sys 0m0.189s 00:06:27.169 06:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.169 ************************************ 00:06:27.169 END TEST accel_dif_functional_tests 00:06:27.169 ************************************ 00:06:27.169 06:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:27.169 ************************************ 00:06:27.169 END TEST accel 00:06:27.169 ************************************ 00:06:27.169 00:06:27.169 real 1m5.093s 00:06:27.169 user 1m9.695s 00:06:27.169 sys 0m6.425s 00:06:27.169 06:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.169 06:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:27.169 06:59:11 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:27.169 06:59:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.169 06:59:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.169 06:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:27.169 ************************************ 00:06:27.169 START TEST accel_rpc 00:06:27.169 ************************************ 00:06:27.169 06:59:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:27.169 * Looking for test storage... 00:06:27.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:27.169 06:59:11 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.169 06:59:11 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59518 00:06:27.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.169 06:59:11 -- accel/accel_rpc.sh@15 -- # waitforlisten 59518 00:06:27.169 06:59:11 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:27.169 06:59:11 -- common/autotest_common.sh@819 -- # '[' -z 59518 ']' 00:06:27.169 06:59:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.169 06:59:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.169 06:59:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.169 06:59:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.169 06:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:27.427 [2024-07-11 06:59:11.290536] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:27.427 [2024-07-11 06:59:11.290927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:06:27.427 [2024-07-11 06:59:11.426311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.685 [2024-07-11 06:59:11.512567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.685 [2024-07-11 06:59:11.513068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.252 06:59:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.252 06:59:12 -- common/autotest_common.sh@852 -- # return 0 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:28.252 06:59:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.252 06:59:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.252 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.252 ************************************ 00:06:28.252 START TEST accel_assign_opcode 00:06:28.252 ************************************ 00:06:28.252 06:59:12 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:28.252 06:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:28.252 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.252 [2024-07-11 06:59:12.285723] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:28.252 06:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:28.252 06:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:28.252 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.252 [2024-07-11 06:59:12.293718] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:28.252 06:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:28.252 06:59:12 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:28.252 06:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:28.252 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 06:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:28.819 06:59:12 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:28.819 06:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:28.819 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 06:59:12 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:28.819 06:59:12 -- accel/accel_rpc.sh@42 -- # grep software 00:06:28.819 06:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:28.819 software 00:06:28.819 ************************************ 00:06:28.819 END TEST accel_assign_opcode 00:06:28.819 ************************************ 00:06:28.819 00:06:28.819 real 0m0.350s 00:06:28.819 user 0m0.053s 00:06:28.819 sys 0m0.011s 00:06:28.819 06:59:12 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.819 06:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 06:59:12 -- accel/accel_rpc.sh@55 -- # killprocess 59518 00:06:28.819 06:59:12 -- common/autotest_common.sh@926 -- # '[' -z 59518 ']' 00:06:28.819 06:59:12 -- common/autotest_common.sh@930 -- # kill -0 59518 00:06:28.819 06:59:12 -- common/autotest_common.sh@931 -- # uname 00:06:28.819 06:59:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.819 06:59:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59518 00:06:28.819 killing process with pid 59518 00:06:28.819 06:59:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.819 06:59:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.819 06:59:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59518' 00:06:28.819 06:59:12 -- common/autotest_common.sh@945 -- # kill 59518 00:06:28.819 06:59:12 -- common/autotest_common.sh@950 -- # wait 59518 00:06:29.385 ************************************ 00:06:29.385 END TEST accel_rpc 00:06:29.385 ************************************ 00:06:29.385 00:06:29.385 real 0m2.076s 00:06:29.385 user 0m2.152s 00:06:29.385 sys 0m0.477s 00:06:29.385 06:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.385 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.385 06:59:13 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.385 06:59:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.385 06:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.385 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.385 ************************************ 00:06:29.385 START TEST app_cmdline 00:06:29.385 ************************************ 00:06:29.385 06:59:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.385 * Looking for test storage... 00:06:29.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.385 06:59:13 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.385 06:59:13 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59628 00:06:29.385 06:59:13 -- app/cmdline.sh@18 -- # waitforlisten 59628 00:06:29.385 06:59:13 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.385 06:59:13 -- common/autotest_common.sh@819 -- # '[' -z 59628 ']' 00:06:29.385 06:59:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.385 06:59:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.385 06:59:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.385 06:59:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.385 06:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.385 [2024-07-11 06:59:13.426148] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
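The app_cmdline test whose output follows starts spdk_tgt with an RPC allow-list: only spdk_get_version and rpc_get_methods are reachable, and any other method fails with JSON-RPC error -32601, which is exactly what the env_dpdk_get_mem_stats call further on triggers. A condensed sketch of that behavior (direct rpc.py calls are shown for illustration; the harness goes through rpc_cmd):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version        # allowed: prints the version JSON seen below
    $rpc rpc_get_methods         # allowed: lists only the two permitted methods
    $rpc env_dpdk_get_mem_stats || echo "rejected with Code=-32601 (Method not found)"
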
00:06:29.385 [2024-07-11 06:59:13.426253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59628 ] 00:06:29.643 [2024-07-11 06:59:13.560759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.643 [2024-07-11 06:59:13.647298] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.643 [2024-07-11 06:59:13.647496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.587 06:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.587 06:59:14 -- common/autotest_common.sh@852 -- # return 0 00:06:30.587 06:59:14 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:30.846 { 00:06:30.846 "fields": { 00:06:30.846 "commit": "4b94202c6", 00:06:30.846 "major": 24, 00:06:30.846 "minor": 1, 00:06:30.846 "patch": 1, 00:06:30.846 "suffix": "-pre" 00:06:30.846 }, 00:06:30.846 "version": "SPDK v24.01.1-pre git sha1 4b94202c6" 00:06:30.846 } 00:06:30.846 06:59:14 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.846 06:59:14 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.846 06:59:14 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.846 06:59:14 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.846 06:59:14 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.846 06:59:14 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.846 06:59:14 -- app/cmdline.sh@26 -- # sort 00:06:30.846 06:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:30.846 06:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:30.846 06:59:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:30.846 06:59:14 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.846 06:59:14 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.846 06:59:14 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.846 06:59:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:30.846 06:59:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.846 06:59:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.846 06:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:30.846 06:59:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.846 06:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:30.846 06:59:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.846 06:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:30.846 06:59:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.846 06:59:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:30.846 06:59:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.103 2024/07/11 06:59:14 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:31.103 request: 00:06:31.103 { 00:06:31.103 "method": "env_dpdk_get_mem_stats", 00:06:31.103 "params": {} 00:06:31.103 } 00:06:31.103 Got JSON-RPC error response 00:06:31.103 GoRPCClient: error on JSON-RPC call 00:06:31.103 06:59:14 -- common/autotest_common.sh@643 -- # es=1 00:06:31.104 06:59:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:31.104 06:59:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:31.104 06:59:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:31.104 06:59:14 -- app/cmdline.sh@1 -- # killprocess 59628 00:06:31.104 06:59:14 -- common/autotest_common.sh@926 -- # '[' -z 59628 ']' 00:06:31.104 06:59:14 -- common/autotest_common.sh@930 -- # kill -0 59628 00:06:31.104 06:59:14 -- common/autotest_common.sh@931 -- # uname 00:06:31.104 06:59:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.104 06:59:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59628 00:06:31.104 06:59:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.104 06:59:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.104 killing process with pid 59628 00:06:31.104 06:59:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59628' 00:06:31.104 06:59:14 -- common/autotest_common.sh@945 -- # kill 59628 00:06:31.104 06:59:14 -- common/autotest_common.sh@950 -- # wait 59628 00:06:31.670 00:06:31.670 real 0m2.214s 00:06:31.670 user 0m2.622s 00:06:31.670 sys 0m0.556s 00:06:31.670 06:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.670 ************************************ 00:06:31.670 END TEST app_cmdline 00:06:31.670 ************************************ 00:06:31.670 06:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:31.670 06:59:15 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.670 06:59:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.670 06:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.670 06:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:31.670 ************************************ 00:06:31.670 START TEST version 00:06:31.670 ************************************ 00:06:31.670 06:59:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.670 * Looking for test storage... 
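version.sh, whose output follows, cross-checks the SPDK_VERSION_* macros in include/spdk/version.h against the version reported by the Python package (python3 -c 'import spdk; print(spdk.__version__)'). The header parsing is a grep/cut/tr pipeline; the helper below is a condensed paraphrase of the get_header_version calls seen in the log, reconstructed for illustration and possibly differing in detail from the real script:

    get_header_version() {   # e.g. get_header_version MAJOR  ->  24
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 1
    suffix=$(get_header_version SUFFIX)   # -pre (the script maps this to "rc0", giving 24.1.1rc0)
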
00:06:31.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.670 06:59:15 -- app/version.sh@17 -- # get_header_version major 00:06:31.670 06:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.670 06:59:15 -- app/version.sh@14 -- # cut -f2 00:06:31.670 06:59:15 -- app/version.sh@14 -- # tr -d '"' 00:06:31.670 06:59:15 -- app/version.sh@17 -- # major=24 00:06:31.670 06:59:15 -- app/version.sh@18 -- # get_header_version minor 00:06:31.670 06:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.670 06:59:15 -- app/version.sh@14 -- # cut -f2 00:06:31.670 06:59:15 -- app/version.sh@14 -- # tr -d '"' 00:06:31.670 06:59:15 -- app/version.sh@18 -- # minor=1 00:06:31.670 06:59:15 -- app/version.sh@19 -- # get_header_version patch 00:06:31.670 06:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.670 06:59:15 -- app/version.sh@14 -- # cut -f2 00:06:31.670 06:59:15 -- app/version.sh@14 -- # tr -d '"' 00:06:31.670 06:59:15 -- app/version.sh@19 -- # patch=1 00:06:31.670 06:59:15 -- app/version.sh@20 -- # get_header_version suffix 00:06:31.670 06:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.670 06:59:15 -- app/version.sh@14 -- # tr -d '"' 00:06:31.670 06:59:15 -- app/version.sh@14 -- # cut -f2 00:06:31.670 06:59:15 -- app/version.sh@20 -- # suffix=-pre 00:06:31.670 06:59:15 -- app/version.sh@22 -- # version=24.1 00:06:31.670 06:59:15 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.670 06:59:15 -- app/version.sh@25 -- # version=24.1.1 00:06:31.670 06:59:15 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:31.670 06:59:15 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.670 06:59:15 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.670 06:59:15 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:31.670 06:59:15 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:31.670 00:06:31.670 real 0m0.144s 00:06:31.670 user 0m0.082s 00:06:31.670 sys 0m0.095s 00:06:31.670 06:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.670 06:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:31.670 ************************************ 00:06:31.670 END TEST version 00:06:31.670 ************************************ 00:06:31.929 06:59:15 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:31.929 06:59:15 -- spdk/autotest.sh@204 -- # uname -s 00:06:31.929 06:59:15 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:31.929 06:59:15 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:31.929 06:59:15 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:31.929 06:59:15 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:31.929 06:59:15 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:31.929 06:59:15 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:31.929 06:59:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:31.929 06:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 06:59:15 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:31.929 06:59:15 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:31.929 06:59:15 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:31.929 06:59:15 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:31.929 06:59:15 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:31.929 06:59:15 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:31.929 06:59:15 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:31.929 06:59:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:31.929 06:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.929 06:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 ************************************ 00:06:31.929 START TEST nvmf_tcp 00:06:31.929 ************************************ 00:06:31.929 06:59:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:31.929 * Looking for test storage... 00:06:31.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:31.929 06:59:15 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:31.929 06:59:15 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:31.929 06:59:15 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:31.929 06:59:15 -- nvmf/common.sh@7 -- # uname -s 00:06:31.929 06:59:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.929 06:59:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.929 06:59:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.929 06:59:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.929 06:59:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.929 06:59:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.929 06:59:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.929 06:59:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.929 06:59:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.929 06:59:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.929 06:59:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:06:31.929 06:59:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:06:31.929 06:59:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.929 06:59:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.929 06:59:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:31.929 06:59:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.929 06:59:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.929 06:59:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.929 06:59:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.929 06:59:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.929 06:59:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.929 06:59:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.929 06:59:15 -- paths/export.sh@5 -- # export PATH 00:06:31.929 06:59:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.929 06:59:15 -- nvmf/common.sh@46 -- # : 0 00:06:31.929 06:59:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:31.929 06:59:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:31.929 06:59:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:31.929 06:59:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.929 06:59:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.929 06:59:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:31.929 06:59:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:31.929 06:59:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:31.929 06:59:15 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:31.929 06:59:15 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:31.929 06:59:15 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:31.929 06:59:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:31.929 06:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:31.929 06:59:15 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:31.930 06:59:15 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:31.930 06:59:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:31.930 06:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.930 06:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:31.930 ************************************ 00:06:31.930 START TEST nvmf_example 00:06:31.930 ************************************ 00:06:31.930 06:59:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:31.930 * Looking for test storage... 
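Because NET_TYPE=virt, the nvmf_example run that follows builds its test network from veth pairs instead of physical NICs: nvmf_veth_init in test/nvmf/common.sh creates a namespace for the target, veth pairs for the initiator and target sides, and a bridge joining them, then opens TCP port 4420. Condensed from the ip/iptables commands echoed below (addresses and interface names are the harness defaults; the second target interface on 10.0.0.3 is omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
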
00:06:31.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.930 06:59:15 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:31.930 06:59:15 -- nvmf/common.sh@7 -- # uname -s 00:06:32.188 06:59:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.188 06:59:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.188 06:59:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.188 06:59:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.188 06:59:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.188 06:59:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.188 06:59:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.188 06:59:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.188 06:59:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.188 06:59:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.188 06:59:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:06:32.188 06:59:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:06:32.188 06:59:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.188 06:59:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.188 06:59:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:32.188 06:59:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.188 06:59:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.188 06:59:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.188 06:59:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.188 06:59:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.188 06:59:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.189 06:59:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.189 06:59:16 -- 
paths/export.sh@5 -- # export PATH 00:06:32.189 06:59:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.189 06:59:16 -- nvmf/common.sh@46 -- # : 0 00:06:32.189 06:59:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:32.189 06:59:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:32.189 06:59:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:32.189 06:59:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.189 06:59:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.189 06:59:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:32.189 06:59:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:32.189 06:59:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:32.189 06:59:16 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:32.189 06:59:16 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:32.189 06:59:16 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:32.189 06:59:16 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:32.189 06:59:16 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:32.189 06:59:16 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:32.189 06:59:16 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:32.189 06:59:16 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:32.189 06:59:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:32.189 06:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:32.189 06:59:16 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:32.189 06:59:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:32.189 06:59:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.189 06:59:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:32.189 06:59:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:32.189 06:59:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:32.189 06:59:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.189 06:59:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.189 06:59:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.189 06:59:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:32.189 06:59:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:32.189 06:59:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:32.189 06:59:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:32.189 06:59:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:32.189 06:59:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:32.189 06:59:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.189 06:59:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.189 06:59:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:32.189 06:59:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:32.189 06:59:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:32.189 06:59:16 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:32.189 06:59:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:32.189 06:59:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.189 06:59:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:32.189 06:59:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:32.189 06:59:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:32.189 06:59:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:32.189 06:59:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:32.189 Cannot find device "nvmf_init_br" 00:06:32.189 06:59:16 -- nvmf/common.sh@153 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:32.189 Cannot find device "nvmf_tgt_br" 00:06:32.189 06:59:16 -- nvmf/common.sh@154 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:32.189 Cannot find device "nvmf_tgt_br2" 00:06:32.189 06:59:16 -- nvmf/common.sh@155 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:32.189 Cannot find device "nvmf_init_br" 00:06:32.189 06:59:16 -- nvmf/common.sh@156 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:32.189 Cannot find device "nvmf_tgt_br" 00:06:32.189 06:59:16 -- nvmf/common.sh@157 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:32.189 Cannot find device "nvmf_tgt_br2" 00:06:32.189 06:59:16 -- nvmf/common.sh@158 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:32.189 Cannot find device "nvmf_br" 00:06:32.189 06:59:16 -- nvmf/common.sh@159 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:32.189 Cannot find device "nvmf_init_if" 00:06:32.189 06:59:16 -- nvmf/common.sh@160 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:32.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:32.189 06:59:16 -- nvmf/common.sh@161 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:32.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:32.189 06:59:16 -- nvmf/common.sh@162 -- # true 00:06:32.189 06:59:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:32.189 06:59:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:32.189 06:59:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:32.189 06:59:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:32.189 06:59:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:32.189 06:59:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:32.189 06:59:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:32.189 06:59:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:32.189 06:59:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:32.189 06:59:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:32.189 
06:59:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:32.189 06:59:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:32.189 06:59:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:32.189 06:59:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:32.189 06:59:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:32.189 06:59:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:32.448 06:59:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:32.448 06:59:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:32.448 06:59:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:32.448 06:59:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:32.448 06:59:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:32.448 06:59:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:32.448 06:59:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:32.448 06:59:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:32.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:06:32.448 00:06:32.448 --- 10.0.0.2 ping statistics --- 00:06:32.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.448 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:06:32.448 06:59:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:32.448 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:32.448 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:06:32.448 00:06:32.448 --- 10.0.0.3 ping statistics --- 00:06:32.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.448 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:06:32.448 06:59:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:32.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:06:32.448 00:06:32.448 --- 10.0.0.1 ping statistics --- 00:06:32.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.448 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:06:32.448 06:59:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.448 06:59:16 -- nvmf/common.sh@421 -- # return 0 00:06:32.448 06:59:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:32.448 06:59:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.448 06:59:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:32.448 06:59:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:32.448 06:59:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.448 06:59:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:32.448 06:59:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:32.448 06:59:16 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:32.448 06:59:16 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:32.448 06:59:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:32.448 06:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:32.448 06:59:16 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:32.448 06:59:16 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:32.448 06:59:16 -- target/nvmf_example.sh@34 -- # nvmfpid=59977 00:06:32.448 06:59:16 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:32.448 06:59:16 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:32.448 06:59:16 -- target/nvmf_example.sh@36 -- # waitforlisten 59977 00:06:32.448 06:59:16 -- common/autotest_common.sh@819 -- # '[' -z 59977 ']' 00:06:32.448 06:59:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.448 06:59:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.448 06:59:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
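Once the example nvmf target is listening on its RPC socket, the test wires up a TCP subsystem over RPC and then drives it with spdk_nvme_perf. The following is condensed from the rpc_cmd calls and the perf invocation echoed below (the harness sends these through rpc_cmd against the example app's socket; the bare rpc.py form here is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # 64 MiB malloc bdev, 512 B blocks -> "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
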
00:06:32.448 06:59:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.448 06:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:33.384 06:59:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.384 06:59:17 -- common/autotest_common.sh@852 -- # return 0 00:06:33.384 06:59:17 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:33.384 06:59:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:33.384 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.642 06:59:17 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:33.642 06:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.642 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.642 06:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.642 06:59:17 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:33.642 06:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.642 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.642 06:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.642 06:59:17 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:33.642 06:59:17 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:33.642 06:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.642 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.642 06:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.642 06:59:17 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:33.642 06:59:17 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:33.642 06:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.642 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.642 06:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.642 06:59:17 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.642 06:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.642 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.642 06:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.642 06:59:17 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:33.642 06:59:17 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:45.849 Initializing NVMe Controllers 00:06:45.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:45.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:45.849 Initialization complete. Launching workers. 
00:06:45.849 ======================================================== 00:06:45.849 Latency(us) 00:06:45.849 Device Information : IOPS MiB/s Average min max 00:06:45.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14530.45 56.76 4404.51 668.85 24467.60 00:06:45.849 ======================================================== 00:06:45.849 Total : 14530.45 56.76 4404.51 668.85 24467.60 00:06:45.849 00:06:45.849 06:59:27 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:45.849 06:59:27 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:45.849 06:59:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:45.849 06:59:27 -- nvmf/common.sh@116 -- # sync 00:06:45.849 06:59:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:45.849 06:59:27 -- nvmf/common.sh@119 -- # set +e 00:06:45.849 06:59:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:45.849 06:59:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:45.849 rmmod nvme_tcp 00:06:45.849 rmmod nvme_fabrics 00:06:45.849 rmmod nvme_keyring 00:06:45.849 06:59:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:45.849 06:59:27 -- nvmf/common.sh@123 -- # set -e 00:06:45.849 06:59:27 -- nvmf/common.sh@124 -- # return 0 00:06:45.849 06:59:27 -- nvmf/common.sh@477 -- # '[' -n 59977 ']' 00:06:45.849 06:59:27 -- nvmf/common.sh@478 -- # killprocess 59977 00:06:45.849 06:59:27 -- common/autotest_common.sh@926 -- # '[' -z 59977 ']' 00:06:45.849 06:59:27 -- common/autotest_common.sh@930 -- # kill -0 59977 00:06:45.849 06:59:27 -- common/autotest_common.sh@931 -- # uname 00:06:45.849 06:59:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.849 06:59:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59977 00:06:45.849 killing process with pid 59977 00:06:45.849 06:59:27 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:06:45.849 06:59:27 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:06:45.849 06:59:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59977' 00:06:45.849 06:59:27 -- common/autotest_common.sh@945 -- # kill 59977 00:06:45.849 06:59:27 -- common/autotest_common.sh@950 -- # wait 59977 00:06:45.849 nvmf threads initialize successfully 00:06:45.849 bdev subsystem init successfully 00:06:45.849 created a nvmf target service 00:06:45.849 create targets's poll groups done 00:06:45.849 all subsystems of target started 00:06:45.849 nvmf target is running 00:06:45.849 all subsystems of target stopped 00:06:45.849 destroy targets's poll groups done 00:06:45.849 destroyed the nvmf target service 00:06:45.849 bdev subsystem finish successfully 00:06:45.849 nvmf threads destroy successfully 00:06:45.849 06:59:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:45.849 06:59:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:45.849 06:59:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:45.849 06:59:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.849 06:59:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:45.849 06:59:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.849 06:59:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.849 06:59:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.849 06:59:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:45.849 06:59:28 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:45.849 06:59:28 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:06:45.849 06:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:45.849 00:06:45.849 real 0m12.287s 00:06:45.849 user 0m43.984s 00:06:45.849 sys 0m1.972s 00:06:45.849 06:59:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.849 06:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:45.849 ************************************ 00:06:45.849 END TEST nvmf_example 00:06:45.849 ************************************ 00:06:45.849 06:59:28 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:45.849 06:59:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:45.849 06:59:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.849 06:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:45.849 ************************************ 00:06:45.849 START TEST nvmf_filesystem 00:06:45.849 ************************************ 00:06:45.849 06:59:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:45.849 * Looking for test storage... 00:06:45.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:45.849 06:59:28 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:45.849 06:59:28 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:45.849 06:59:28 -- common/autotest_common.sh@34 -- # set -e 00:06:45.850 06:59:28 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:45.850 06:59:28 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:45.850 06:59:28 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:45.850 06:59:28 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:45.850 06:59:28 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:45.850 06:59:28 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:45.850 06:59:28 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:45.850 06:59:28 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:45.850 06:59:28 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:45.850 06:59:28 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:45.850 06:59:28 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:45.850 06:59:28 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:45.850 06:59:28 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:45.850 06:59:28 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:45.850 06:59:28 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:45.850 06:59:28 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:45.850 06:59:28 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:45.850 06:59:28 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:45.850 06:59:28 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:45.850 06:59:28 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:45.850 06:59:28 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:45.850 06:59:28 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:45.850 06:59:28 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:45.850 06:59:28 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:45.850 06:59:28 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:45.850 06:59:28 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:06:45.850 06:59:28 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:45.850 06:59:28 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:45.850 06:59:28 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:45.850 06:59:28 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:45.850 06:59:28 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:45.850 06:59:28 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:45.850 06:59:28 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:45.850 06:59:28 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:45.850 06:59:28 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:45.850 06:59:28 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:45.850 06:59:28 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:45.850 06:59:28 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:45.850 06:59:28 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:45.850 06:59:28 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:45.850 06:59:28 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:45.850 06:59:28 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:45.850 06:59:28 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:45.850 06:59:28 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:45.850 06:59:28 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:45.850 06:59:28 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:45.850 06:59:28 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:45.850 06:59:28 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:45.850 06:59:28 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:45.850 06:59:28 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:45.850 06:59:28 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:45.850 06:59:28 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:45.850 06:59:28 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:45.850 06:59:28 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:45.850 06:59:28 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:45.850 06:59:28 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:45.850 06:59:28 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:45.850 06:59:28 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:45.850 06:59:28 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:45.850 06:59:28 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:45.850 06:59:28 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:45.850 06:59:28 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:06:45.850 06:59:28 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:45.850 06:59:28 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:45.850 06:59:28 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:45.850 06:59:28 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:45.850 06:59:28 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:45.850 06:59:28 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:45.850 06:59:28 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:45.850 06:59:28 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:45.850 06:59:28 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:45.850 06:59:28 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:06:45.850 06:59:28 -- common/build_config.sh@69 -- # 
CONFIG_FIO_PLUGIN=y 00:06:45.850 06:59:28 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:45.850 06:59:28 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:45.850 06:59:28 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:45.850 06:59:28 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:45.850 06:59:28 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:45.850 06:59:28 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:45.850 06:59:28 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:45.850 06:59:28 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:45.850 06:59:28 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:45.850 06:59:28 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:45.850 06:59:28 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:45.850 06:59:28 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:45.850 06:59:28 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:45.850 06:59:28 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:45.850 06:59:28 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:45.850 06:59:28 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:45.850 06:59:28 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:45.850 06:59:28 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:45.850 06:59:28 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:45.850 06:59:28 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:45.850 06:59:28 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:45.850 06:59:28 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:45.850 06:59:28 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:45.850 06:59:28 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:45.850 06:59:28 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:45.850 06:59:28 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:45.850 #define SPDK_CONFIG_H 00:06:45.850 #define SPDK_CONFIG_APPS 1 00:06:45.850 #define SPDK_CONFIG_ARCH native 00:06:45.850 #undef SPDK_CONFIG_ASAN 00:06:45.850 #define SPDK_CONFIG_AVAHI 1 00:06:45.850 #undef SPDK_CONFIG_CET 00:06:45.850 #define SPDK_CONFIG_COVERAGE 1 00:06:45.850 #define SPDK_CONFIG_CROSS_PREFIX 00:06:45.850 #undef SPDK_CONFIG_CRYPTO 00:06:45.850 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:45.850 #undef SPDK_CONFIG_CUSTOMOCF 00:06:45.850 #undef SPDK_CONFIG_DAOS 00:06:45.850 #define SPDK_CONFIG_DAOS_DIR 00:06:45.850 #define SPDK_CONFIG_DEBUG 1 00:06:45.850 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:45.850 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:45.850 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:45.850 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:45.850 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:45.850 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:45.850 #define SPDK_CONFIG_EXAMPLES 1 00:06:45.850 #undef SPDK_CONFIG_FC 00:06:45.850 #define SPDK_CONFIG_FC_PATH 00:06:45.850 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:45.850 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:45.850 #undef 
SPDK_CONFIG_FUSE 00:06:45.850 #undef SPDK_CONFIG_FUZZER 00:06:45.850 #define SPDK_CONFIG_FUZZER_LIB 00:06:45.850 #define SPDK_CONFIG_GOLANG 1 00:06:45.850 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:45.850 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:45.850 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:45.850 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:45.850 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:45.850 #define SPDK_CONFIG_IDXD 1 00:06:45.850 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:45.850 #undef SPDK_CONFIG_IPSEC_MB 00:06:45.850 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:45.850 #define SPDK_CONFIG_ISAL 1 00:06:45.850 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:45.850 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:45.850 #define SPDK_CONFIG_LIBDIR 00:06:45.850 #undef SPDK_CONFIG_LTO 00:06:45.850 #define SPDK_CONFIG_MAX_LCORES 00:06:45.850 #define SPDK_CONFIG_NVME_CUSE 1 00:06:45.850 #undef SPDK_CONFIG_OCF 00:06:45.850 #define SPDK_CONFIG_OCF_PATH 00:06:45.850 #define SPDK_CONFIG_OPENSSL_PATH 00:06:45.850 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:45.850 #undef SPDK_CONFIG_PGO_USE 00:06:45.850 #define SPDK_CONFIG_PREFIX /usr/local 00:06:45.850 #undef SPDK_CONFIG_RAID5F 00:06:45.850 #undef SPDK_CONFIG_RBD 00:06:45.850 #define SPDK_CONFIG_RDMA 1 00:06:45.850 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:45.850 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:45.850 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:45.850 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:45.850 #define SPDK_CONFIG_SHARED 1 00:06:45.850 #undef SPDK_CONFIG_SMA 00:06:45.850 #define SPDK_CONFIG_TESTS 1 00:06:45.850 #undef SPDK_CONFIG_TSAN 00:06:45.850 #define SPDK_CONFIG_UBLK 1 00:06:45.850 #define SPDK_CONFIG_UBSAN 1 00:06:45.850 #undef SPDK_CONFIG_UNIT_TESTS 00:06:45.850 #undef SPDK_CONFIG_URING 00:06:45.850 #define SPDK_CONFIG_URING_PATH 00:06:45.850 #undef SPDK_CONFIG_URING_ZNS 00:06:45.850 #define SPDK_CONFIG_USDT 1 00:06:45.850 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:45.850 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:45.850 #define SPDK_CONFIG_VFIO_USER 1 00:06:45.850 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:45.850 #define SPDK_CONFIG_VHOST 1 00:06:45.850 #define SPDK_CONFIG_VIRTIO 1 00:06:45.850 #undef SPDK_CONFIG_VTUNE 00:06:45.850 #define SPDK_CONFIG_VTUNE_DIR 00:06:45.850 #define SPDK_CONFIG_WERROR 1 00:06:45.850 #define SPDK_CONFIG_WPDK_DIR 00:06:45.850 #undef SPDK_CONFIG_XNVME 00:06:45.850 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:45.850 06:59:28 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:45.850 06:59:28 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.850 06:59:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.851 06:59:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.851 06:59:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.851 06:59:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.851 06:59:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.851 06:59:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.851 06:59:28 -- paths/export.sh@5 -- # export PATH 00:06:45.851 06:59:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.851 06:59:28 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:45.851 06:59:28 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:45.851 06:59:28 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:45.851 06:59:28 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:45.851 06:59:28 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:45.851 06:59:28 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:45.851 06:59:28 -- pm/common@16 -- # TEST_TAG=N/A 00:06:45.851 06:59:28 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:45.851 06:59:28 -- common/autotest_common.sh@52 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:45.851 06:59:28 -- common/autotest_common.sh@56 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:45.851 06:59:28 -- common/autotest_common.sh@58 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:45.851 06:59:28 -- common/autotest_common.sh@60 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:45.851 06:59:28 -- common/autotest_common.sh@62 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:45.851 06:59:28 -- common/autotest_common.sh@64 -- # : 00:06:45.851 06:59:28 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:45.851 06:59:28 -- common/autotest_common.sh@66 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@67 -- # export 
SPDK_TEST_RELEASE_BUILD 00:06:45.851 06:59:28 -- common/autotest_common.sh@68 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:45.851 06:59:28 -- common/autotest_common.sh@70 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:45.851 06:59:28 -- common/autotest_common.sh@72 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:45.851 06:59:28 -- common/autotest_common.sh@74 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:45.851 06:59:28 -- common/autotest_common.sh@76 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:45.851 06:59:28 -- common/autotest_common.sh@78 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:45.851 06:59:28 -- common/autotest_common.sh@80 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:45.851 06:59:28 -- common/autotest_common.sh@82 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:45.851 06:59:28 -- common/autotest_common.sh@84 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:45.851 06:59:28 -- common/autotest_common.sh@86 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:45.851 06:59:28 -- common/autotest_common.sh@88 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:45.851 06:59:28 -- common/autotest_common.sh@90 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:45.851 06:59:28 -- common/autotest_common.sh@92 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:45.851 06:59:28 -- common/autotest_common.sh@94 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:45.851 06:59:28 -- common/autotest_common.sh@96 -- # : tcp 00:06:45.851 06:59:28 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:45.851 06:59:28 -- common/autotest_common.sh@98 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:45.851 06:59:28 -- common/autotest_common.sh@100 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:45.851 06:59:28 -- common/autotest_common.sh@102 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:45.851 06:59:28 -- common/autotest_common.sh@104 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:45.851 06:59:28 -- common/autotest_common.sh@106 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:45.851 06:59:28 -- common/autotest_common.sh@108 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:45.851 06:59:28 -- common/autotest_common.sh@110 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:45.851 06:59:28 -- common/autotest_common.sh@112 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:45.851 06:59:28 -- common/autotest_common.sh@114 -- # : 0 00:06:45.851 06:59:28 -- 
common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:45.851 06:59:28 -- common/autotest_common.sh@116 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:45.851 06:59:28 -- common/autotest_common.sh@118 -- # : 00:06:45.851 06:59:28 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:45.851 06:59:28 -- common/autotest_common.sh@120 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:45.851 06:59:28 -- common/autotest_common.sh@122 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:45.851 06:59:28 -- common/autotest_common.sh@124 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:45.851 06:59:28 -- common/autotest_common.sh@126 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:45.851 06:59:28 -- common/autotest_common.sh@128 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:45.851 06:59:28 -- common/autotest_common.sh@130 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:45.851 06:59:28 -- common/autotest_common.sh@132 -- # : 00:06:45.851 06:59:28 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:45.851 06:59:28 -- common/autotest_common.sh@134 -- # : true 00:06:45.851 06:59:28 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:45.851 06:59:28 -- common/autotest_common.sh@136 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:45.851 06:59:28 -- common/autotest_common.sh@138 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:45.851 06:59:28 -- common/autotest_common.sh@140 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:45.851 06:59:28 -- common/autotest_common.sh@142 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:45.851 06:59:28 -- common/autotest_common.sh@144 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:45.851 06:59:28 -- common/autotest_common.sh@146 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:45.851 06:59:28 -- common/autotest_common.sh@148 -- # : 00:06:45.851 06:59:28 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:45.851 06:59:28 -- common/autotest_common.sh@150 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:45.851 06:59:28 -- common/autotest_common.sh@152 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:06:45.851 06:59:28 -- common/autotest_common.sh@154 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:45.851 06:59:28 -- common/autotest_common.sh@156 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:45.851 06:59:28 -- common/autotest_common.sh@158 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:45.851 06:59:28 -- common/autotest_common.sh@160 -- # : 0 00:06:45.851 06:59:28 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:45.851 06:59:28 -- common/autotest_common.sh@163 -- # : 00:06:45.851 06:59:28 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:45.851 06:59:28 -- common/autotest_common.sh@165 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:45.851 06:59:28 -- common/autotest_common.sh@167 -- # : 1 00:06:45.851 06:59:28 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:45.851 06:59:28 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:45.851 06:59:28 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:45.851 06:59:28 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:45.851 06:59:28 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:45.851 06:59:28 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:45.851 06:59:28 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:45.851 06:59:28 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:45.852 06:59:28 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:45.852 06:59:28 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:45.852 06:59:28 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:45.852 06:59:28 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:45.852 06:59:28 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:45.852 06:59:28 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:45.852 06:59:28 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:45.852 06:59:28 -- 
common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:45.852 06:59:28 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:45.852 06:59:28 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:45.852 06:59:28 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:45.852 06:59:28 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:45.852 06:59:28 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:45.852 06:59:28 -- common/autotest_common.sh@196 -- # cat 00:06:45.852 06:59:28 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:45.852 06:59:28 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:45.852 06:59:28 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:45.852 06:59:28 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:45.852 06:59:28 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:45.852 06:59:28 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:45.852 06:59:28 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:45.852 06:59:28 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:45.852 06:59:28 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:45.852 06:59:28 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:45.852 06:59:28 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:45.852 06:59:28 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:45.852 06:59:28 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:45.852 06:59:28 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:45.852 06:59:28 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:45.852 06:59:28 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:45.852 06:59:28 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:45.852 06:59:28 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:45.852 06:59:28 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:45.852 06:59:28 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:45.852 06:59:28 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:45.852 06:59:28 -- common/autotest_common.sh@249 -- # valgrind= 00:06:45.852 06:59:28 -- common/autotest_common.sh@255 -- # uname -s 00:06:45.852 06:59:28 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:45.852 06:59:28 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:45.852 06:59:28 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 
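Two shell idioms in the trace above are easy to miss. Each test flag is given a default with the no-op parameter expansion and then exported (that is what the bare ": 0" / "export SPDK_TEST_..." pairs are), and LeakSanitizer is pointed at a freshly written suppression file that whitelists the known libfuse3 leak. A condensed sketch, using the same variable names and option strings that appear in the trace:

    # default-and-export idiom: ": 0" in the xtrace is this no-op expansion
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME

    # sanitizer knobs exactly as exported above
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # LSAN suppression file is rebuilt on every run; libfuse3 is a known leaker
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file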
00:06:45.852 06:59:28 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:45.852 06:59:28 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:45.852 06:59:28 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:45.852 06:59:28 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:45.852 06:59:28 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:06:45.852 06:59:28 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:45.852 06:59:28 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:45.852 06:59:28 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:45.852 06:59:28 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:45.852 06:59:28 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:45.852 06:59:28 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:45.852 06:59:28 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:45.852 06:59:28 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:45.852 06:59:28 -- common/autotest_common.sh@309 -- # [[ -z 60228 ]] 00:06:45.852 06:59:28 -- common/autotest_common.sh@309 -- # kill -0 60228 00:06:45.852 06:59:28 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:45.852 06:59:28 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:45.852 06:59:28 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:45.852 06:59:28 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:45.852 06:59:28 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:45.852 06:59:28 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:45.852 06:59:28 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:45.852 06:59:28 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:45.852 06:59:28 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.EO3LGF 00:06:45.852 06:59:28 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:45.852 06:59:28 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:45.852 06:59:28 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:45.852 06:59:28 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.EO3LGF/tests/target /tmp/spdk.EO3LGF 00:06:45.852 06:59:28 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@318 -- # df -T 00:06:45.852 06:59:28 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266630144 00:06:45.852 06:59:28 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=6267887616 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=13792219136 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=5237436416 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=13792219136 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=5237436416 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267756544 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=135168 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # 
fss["$mount"]=tmpfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:06:45.852 06:59:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=97364086784 00:06:45.852 06:59:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:06:45.852 06:59:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=2338693120 00:06:45.852 06:59:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:45.852 06:59:28 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:45.852 * Looking for test storage... 00:06:45.853 06:59:28 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:45.853 06:59:28 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:45.853 06:59:28 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:45.853 06:59:28 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:45.853 06:59:28 -- common/autotest_common.sh@363 -- # mount=/home 00:06:45.853 06:59:28 -- common/autotest_common.sh@365 -- # target_space=13792219136 00:06:45.853 06:59:28 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:45.853 06:59:28 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:45.853 06:59:28 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:06:45.853 06:59:28 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:06:45.853 06:59:28 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:06:45.853 06:59:28 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:45.853 06:59:28 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:45.853 06:59:28 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:45.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:45.853 06:59:28 -- common/autotest_common.sh@380 -- # return 0 00:06:45.853 06:59:28 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:45.853 06:59:28 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:45.853 06:59:28 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:45.853 06:59:28 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:45.853 06:59:28 -- common/autotest_common.sh@1672 -- # true 00:06:45.853 06:59:28 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:45.853 06:59:28 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:45.853 06:59:28 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:45.853 06:59:28 -- common/autotest_common.sh@27 -- # exec 00:06:45.853 06:59:28 -- common/autotest_common.sh@29 -- # exec 00:06:45.853 06:59:28 -- common/autotest_common.sh@31 -- 
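The set_test_storage walk traced above parses df output, keeps the first candidate directory whose backing filesystem still has the requested ~2 GiB free (here /home on btrfs, 13792219136 bytes available), and exports it as SPDK_TEST_STORAGE, falling back to a mktemp directory otherwise. A rough, simplified equivalent of that decision, not the literal function:

    requested_size=2214592512    # ~2 GiB plus slack, as in the trace
    testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target
    # mount point and free bytes backing the candidate directory
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/ {print $6}')
    target_space=$(df -B1 --output=avail "$testdir" | tail -n1)
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$testdir
    else
        export SPDK_TEST_STORAGE=$(mktemp -udt spdk.XXXXXX)/tests/target
        mkdir -p "$SPDK_TEST_STORAGE"
    fi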
# xtrace_restore 00:06:45.853 06:59:28 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:45.853 06:59:28 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:45.853 06:59:28 -- common/autotest_common.sh@18 -- # set -x 00:06:45.853 06:59:28 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:45.853 06:59:28 -- nvmf/common.sh@7 -- # uname -s 00:06:45.853 06:59:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.853 06:59:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.853 06:59:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.853 06:59:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.853 06:59:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.853 06:59:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.853 06:59:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.853 06:59:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.853 06:59:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.853 06:59:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.853 06:59:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:06:45.853 06:59:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:06:45.853 06:59:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.853 06:59:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.853 06:59:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:45.853 06:59:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.853 06:59:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.853 06:59:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.853 06:59:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.853 06:59:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.853 06:59:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.853 06:59:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.853 06:59:28 -- paths/export.sh@5 -- # export PATH 00:06:45.853 06:59:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.853 06:59:28 -- nvmf/common.sh@46 -- # : 0 00:06:45.853 06:59:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:45.853 06:59:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:45.853 06:59:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:45.853 06:59:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.853 06:59:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.853 06:59:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:45.853 06:59:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:45.853 06:59:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:45.853 06:59:28 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:45.853 06:59:28 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:45.853 06:59:28 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:45.853 06:59:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:45.853 06:59:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.853 06:59:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:45.853 06:59:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:45.853 06:59:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:45.853 06:59:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.853 06:59:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.853 06:59:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.853 06:59:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:45.853 06:59:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:45.853 06:59:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:45.853 06:59:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:45.853 06:59:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:45.853 06:59:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:45.853 06:59:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.853 06:59:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.853 06:59:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:45.853 06:59:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:45.853 06:59:28 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:45.853 06:59:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:45.853 06:59:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:45.853 06:59:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.853 06:59:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:45.853 06:59:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:45.853 06:59:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:45.853 06:59:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:45.853 06:59:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:45.853 06:59:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:45.853 Cannot find device "nvmf_tgt_br" 00:06:45.853 06:59:28 -- nvmf/common.sh@154 -- # true 00:06:45.853 06:59:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:45.853 Cannot find device "nvmf_tgt_br2" 00:06:45.853 06:59:28 -- nvmf/common.sh@155 -- # true 00:06:45.853 06:59:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:45.853 06:59:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:45.853 Cannot find device "nvmf_tgt_br" 00:06:45.853 06:59:28 -- nvmf/common.sh@157 -- # true 00:06:45.853 06:59:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:45.853 Cannot find device "nvmf_tgt_br2" 00:06:45.853 06:59:28 -- nvmf/common.sh@158 -- # true 00:06:45.853 06:59:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:45.853 06:59:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:45.853 06:59:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:45.853 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:45.854 06:59:28 -- nvmf/common.sh@161 -- # true 00:06:45.854 06:59:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:45.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:45.854 06:59:28 -- nvmf/common.sh@162 -- # true 00:06:45.854 06:59:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:45.854 06:59:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:45.854 06:59:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:45.854 06:59:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:45.854 06:59:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:45.854 06:59:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:45.854 06:59:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:45.854 06:59:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:45.854 06:59:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:45.854 06:59:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:45.854 06:59:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:45.854 06:59:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:45.854 06:59:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:45.854 06:59:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:45.854 06:59:28 
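Because NET_TYPE=virt, nvmf_veth_init builds the whole test network in software: a target network namespace, veth pairs for the initiator and target sides, 10.0.0.x addressing, and a bridge joining the root-namespace peer ends, which is what the surrounding ip commands are doing (the failed deletes beforehand are just best-effort cleanup of a previous run). Condensed to a single target interface (the second one, nvmf_tgt_if2, is handled identically), the topology is roughly:

    NS=nvmf_tgt_ns_spdk
    ip netns add $NS
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns $NS                     # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator address
    ip netns exec $NS ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br               # bridge the root-namespace peer ends
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set $dev up; done
    ip netns exec $NS ip link set nvmf_tgt_if up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability check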
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:45.854 06:59:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:45.854 06:59:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:45.854 06:59:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:45.854 06:59:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:45.854 06:59:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:45.854 06:59:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:45.854 06:59:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:45.854 06:59:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:45.854 06:59:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:45.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:06:45.854 00:06:45.854 --- 10.0.0.2 ping statistics --- 00:06:45.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.854 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:45.854 06:59:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:45.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:45.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:06:45.854 00:06:45.854 --- 10.0.0.3 ping statistics --- 00:06:45.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.854 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:06:45.854 06:59:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:45.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:45.854 00:06:45.854 --- 10.0.0.1 ping statistics --- 00:06:45.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.854 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:45.854 06:59:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.854 06:59:28 -- nvmf/common.sh@421 -- # return 0 00:06:45.854 06:59:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:45.854 06:59:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.854 06:59:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:45.854 06:59:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:45.854 06:59:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.854 06:59:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:45.854 06:59:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:45.854 06:59:28 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:45.854 06:59:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:45.854 06:59:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.854 06:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:45.854 ************************************ 00:06:45.854 START TEST nvmf_filesystem_no_in_capsule 00:06:45.854 ************************************ 00:06:45.854 06:59:28 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:45.854 06:59:28 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:45.854 06:59:28 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:45.854 06:59:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:45.854 06:59:28 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:06:45.854 06:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:45.854 06:59:28 -- nvmf/common.sh@469 -- # nvmfpid=60380 00:06:45.854 06:59:28 -- nvmf/common.sh@470 -- # waitforlisten 60380 00:06:45.854 06:59:28 -- common/autotest_common.sh@819 -- # '[' -z 60380 ']' 00:06:45.854 06:59:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.854 06:59:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.854 06:59:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.854 06:59:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.854 06:59:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.854 06:59:28 -- common/autotest_common.sh@10 -- # set +x 00:06:45.854 [2024-07-11 06:59:28.883268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:45.854 [2024-07-11 06:59:28.883348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.854 [2024-07-11 06:59:29.025703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.854 [2024-07-11 06:59:29.127280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.854 [2024-07-11 06:59:29.127675] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.854 [2024-07-11 06:59:29.127791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.854 [2024-07-11 06:59:29.127966] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
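nvmfappstart then launches the target inside that namespace and blocks until its JSON-RPC socket answers; the pid (60380 in this run) is kept so later steps can probe it with kill -0 and tear it down at the end. A minimal reproduction of the start-and-wait step, assuming the repo layout from the trace and the default /var/tmp/spdk.sock RPC address shown earlier:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the app is listening on its RPC socket, bail out if it died
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 $nvmfpid || exit 1
        sleep 0.5
    done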
00:06:45.854 [2024-07-11 06:59:29.128249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.854 [2024-07-11 06:59:29.128394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.854 [2024-07-11 06:59:29.128495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.854 [2024-07-11 06:59:29.128496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.854 06:59:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.854 06:59:29 -- common/autotest_common.sh@852 -- # return 0 00:06:45.854 06:59:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:45.854 06:59:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:45.854 06:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 06:59:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.114 06:59:29 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:46.114 06:59:29 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:46.114 06:59:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.114 06:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 [2024-07-11 06:59:29.918099] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.114 06:59:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.114 06:59:29 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:46.114 06:59:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.114 06:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 Malloc1 00:06:46.114 06:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.114 06:59:30 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:46.114 06:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.114 06:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 06:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.114 06:59:30 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:46.114 06:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.114 06:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 06:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.114 06:59:30 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.114 06:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.114 06:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 [2024-07-11 06:59:30.115305] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.114 06:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.114 06:59:30 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:46.114 06:59:30 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:46.114 06:59:30 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:46.114 06:59:30 -- common/autotest_common.sh@1359 -- # local bs 00:06:46.114 06:59:30 -- common/autotest_common.sh@1360 -- # local nb 00:06:46.114 06:59:30 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:46.114 06:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.114 06:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 
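With the target up, the subsystem is assembled over JSON-RPC: a TCP transport (in-capsule data size 0 for this first pass), a 512 MiB malloc bdev with 512-byte blocks, a subsystem carrying the serial the host will later grep for, the namespace, and a listener on the namespaced 10.0.0.2:4420. rpc_cmd in the trace is a thin wrapper around the RPC client; the same sequence driven directly through scripts/rpc.py would look roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The method names and arguments are taken verbatim from the rpc_cmd calls in the trace; only the explicit rpc.py invocation is inferred.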
06:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.114 06:59:30 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:46.114 { 00:06:46.114 "aliases": [ 00:06:46.114 "a4b51a4f-7ac0-48ca-8fa8-d80939f71ac6" 00:06:46.114 ], 00:06:46.114 "assigned_rate_limits": { 00:06:46.114 "r_mbytes_per_sec": 0, 00:06:46.114 "rw_ios_per_sec": 0, 00:06:46.114 "rw_mbytes_per_sec": 0, 00:06:46.114 "w_mbytes_per_sec": 0 00:06:46.114 }, 00:06:46.114 "block_size": 512, 00:06:46.114 "claim_type": "exclusive_write", 00:06:46.114 "claimed": true, 00:06:46.114 "driver_specific": {}, 00:06:46.114 "memory_domains": [ 00:06:46.114 { 00:06:46.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.114 "dma_device_type": 2 00:06:46.114 } 00:06:46.114 ], 00:06:46.114 "name": "Malloc1", 00:06:46.114 "num_blocks": 1048576, 00:06:46.114 "product_name": "Malloc disk", 00:06:46.114 "supported_io_types": { 00:06:46.114 "abort": true, 00:06:46.114 "compare": false, 00:06:46.114 "compare_and_write": false, 00:06:46.114 "flush": true, 00:06:46.114 "nvme_admin": false, 00:06:46.114 "nvme_io": false, 00:06:46.114 "read": true, 00:06:46.114 "reset": true, 00:06:46.114 "unmap": true, 00:06:46.114 "write": true, 00:06:46.114 "write_zeroes": true 00:06:46.114 }, 00:06:46.114 "uuid": "a4b51a4f-7ac0-48ca-8fa8-d80939f71ac6", 00:06:46.114 "zoned": false 00:06:46.114 } 00:06:46.114 ]' 00:06:46.114 06:59:30 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:46.371 06:59:30 -- common/autotest_common.sh@1362 -- # bs=512 00:06:46.371 06:59:30 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:46.371 06:59:30 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:46.371 06:59:30 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:46.371 06:59:30 -- common/autotest_common.sh@1367 -- # echo 512 00:06:46.371 06:59:30 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:46.371 06:59:30 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:46.629 06:59:30 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:46.629 06:59:30 -- common/autotest_common.sh@1177 -- # local i=0 00:06:46.629 06:59:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:46.629 06:59:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:46.629 06:59:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:48.526 06:59:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:48.526 06:59:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:48.526 06:59:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:48.526 06:59:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:48.526 06:59:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:48.526 06:59:32 -- common/autotest_common.sh@1187 -- # return 0 00:06:48.526 06:59:32 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:48.526 06:59:32 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:48.526 06:59:32 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:48.526 06:59:32 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:48.526 06:59:32 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:48.526 06:59:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:48.526 06:59:32 -- 
setup/common.sh@80 -- # echo 536870912 00:06:48.526 06:59:32 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:48.526 06:59:32 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:48.526 06:59:32 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:48.526 06:59:32 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:48.526 06:59:32 -- target/filesystem.sh@69 -- # partprobe 00:06:48.784 06:59:32 -- target/filesystem.sh@70 -- # sleep 1 00:06:49.716 06:59:33 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:49.716 06:59:33 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:49.716 06:59:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:49.716 06:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.716 06:59:33 -- common/autotest_common.sh@10 -- # set +x 00:06:49.716 ************************************ 00:06:49.716 START TEST filesystem_ext4 00:06:49.716 ************************************ 00:06:49.716 06:59:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:49.716 06:59:33 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:49.716 06:59:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.716 06:59:33 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:49.716 06:59:33 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:49.716 06:59:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:49.716 06:59:33 -- common/autotest_common.sh@904 -- # local i=0 00:06:49.716 06:59:33 -- common/autotest_common.sh@905 -- # local force 00:06:49.716 06:59:33 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:49.716 06:59:33 -- common/autotest_common.sh@908 -- # force=-F 00:06:49.716 06:59:33 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:49.716 mke2fs 1.46.5 (30-Dec-2021) 00:06:49.716 Discarding device blocks: 0/522240 done 00:06:49.716 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:49.716 Filesystem UUID: 14bd59b8-5ac4-47b2-8d51-abce0ad104e3 00:06:49.716 Superblock backups stored on blocks: 00:06:49.716 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:49.716 00:06:49.716 Allocating group tables: 0/64 done 00:06:49.716 Writing inode tables: 0/64 done 00:06:49.974 Creating journal (8192 blocks): done 00:06:49.974 Writing superblocks and filesystem accounting information: 0/64 done 00:06:49.974 00:06:49.974 06:59:33 -- common/autotest_common.sh@921 -- # return 0 00:06:49.974 06:59:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:49.974 06:59:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:49.974 06:59:33 -- target/filesystem.sh@25 -- # sync 00:06:49.974 06:59:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:49.974 06:59:33 -- target/filesystem.sh@27 -- # sync 00:06:49.974 06:59:33 -- target/filesystem.sh@29 -- # i=0 00:06:49.974 06:59:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:49.974 06:59:33 -- target/filesystem.sh@37 -- # kill -0 60380 00:06:49.974 06:59:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:49.974 06:59:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:49.974 06:59:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:49.974 06:59:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:49.974 00:06:49.974 real 0m0.319s 00:06:49.974 user 0m0.019s 00:06:49.974 sys 0m0.058s 00:06:49.974 
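On the host side the flow traced above is: connect with the generated hostnqn, locate the new block device by its SPDKISFASTANDAWESOME serial, carve a single GPT partition, then run the same mkfs / mount / write / sync / delete / umount cycle for each filesystem under test (ext4 shown above; make_filesystem picks -F for mkfs.ext4 and -f for btrfs and xfs). Stripped of the harness, the per-filesystem cycle is roughly:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1}')   # nvme0n1 here
    parted -s /dev/$dev mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    mkdir -p /mnt/device
    mkfs.ext4 -F /dev/${dev}p1
    mount /dev/${dev}p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device

The device and partition names are those reported in this run; the awk-based serial lookup stands in for the grep -oP expression the script actually uses.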
06:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.974 ************************************ 00:06:49.974 06:59:33 -- common/autotest_common.sh@10 -- # set +x 00:06:49.974 END TEST filesystem_ext4 00:06:49.974 ************************************ 00:06:49.974 06:59:34 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:49.974 06:59:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:49.974 06:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.974 06:59:34 -- common/autotest_common.sh@10 -- # set +x 00:06:49.974 ************************************ 00:06:49.974 START TEST filesystem_btrfs 00:06:49.974 ************************************ 00:06:49.974 06:59:34 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:49.974 06:59:34 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:49.974 06:59:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.974 06:59:34 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:49.974 06:59:34 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:49.974 06:59:34 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:49.974 06:59:34 -- common/autotest_common.sh@904 -- # local i=0 00:06:49.974 06:59:34 -- common/autotest_common.sh@905 -- # local force 00:06:49.974 06:59:34 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:49.974 06:59:34 -- common/autotest_common.sh@910 -- # force=-f 00:06:49.974 06:59:34 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:50.232 btrfs-progs v6.6.2 00:06:50.232 See https://btrfs.readthedocs.io for more information. 00:06:50.232 00:06:50.232 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:50.232 NOTE: several default settings have changed in version 5.15, please make sure 00:06:50.232 this does not affect your deployments: 00:06:50.232 - DUP for metadata (-m dup) 00:06:50.232 - enabled no-holes (-O no-holes) 00:06:50.232 - enabled free-space-tree (-R free-space-tree) 00:06:50.232 00:06:50.232 Label: (null) 00:06:50.232 UUID: 9f93d162-33a9-4407-9193-6d069c9f0770 00:06:50.232 Node size: 16384 00:06:50.232 Sector size: 4096 00:06:50.232 Filesystem size: 510.00MiB 00:06:50.232 Block group profiles: 00:06:50.232 Data: single 8.00MiB 00:06:50.232 Metadata: DUP 32.00MiB 00:06:50.232 System: DUP 8.00MiB 00:06:50.232 SSD detected: yes 00:06:50.232 Zoned device: no 00:06:50.232 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:50.232 Runtime features: free-space-tree 00:06:50.232 Checksum: crc32c 00:06:50.232 Number of devices: 1 00:06:50.232 Devices: 00:06:50.232 ID SIZE PATH 00:06:50.232 1 510.00MiB /dev/nvme0n1p1 00:06:50.232 00:06:50.232 06:59:34 -- common/autotest_common.sh@921 -- # return 0 00:06:50.232 06:59:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.232 06:59:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.490 06:59:34 -- target/filesystem.sh@25 -- # sync 00:06:50.490 06:59:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.490 06:59:34 -- target/filesystem.sh@27 -- # sync 00:06:50.490 06:59:34 -- target/filesystem.sh@29 -- # i=0 00:06:50.490 06:59:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.490 06:59:34 -- target/filesystem.sh@37 -- # kill -0 60380 00:06:50.490 06:59:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.490 06:59:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.490 06:59:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.490 06:59:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.490 ************************************ 00:06:50.490 END TEST filesystem_btrfs 00:06:50.490 ************************************ 00:06:50.490 00:06:50.490 real 0m0.332s 00:06:50.490 user 0m0.024s 00:06:50.490 sys 0m0.070s 00:06:50.490 06:59:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.490 06:59:34 -- common/autotest_common.sh@10 -- # set +x 00:06:50.490 06:59:34 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:50.490 06:59:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:50.490 06:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.490 06:59:34 -- common/autotest_common.sh@10 -- # set +x 00:06:50.490 ************************************ 00:06:50.490 START TEST filesystem_xfs 00:06:50.490 ************************************ 00:06:50.490 06:59:34 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:50.490 06:59:34 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:50.490 06:59:34 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.490 06:59:34 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:50.490 06:59:34 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:50.490 06:59:34 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:50.490 06:59:34 -- common/autotest_common.sh@904 -- # local i=0 00:06:50.490 06:59:34 -- common/autotest_common.sh@905 -- # local force 00:06:50.490 06:59:34 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:50.490 06:59:34 -- common/autotest_common.sh@910 -- # force=-f 00:06:50.490 06:59:34 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:50.784 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:50.784 = sectsz=512 attr=2, projid32bit=1 00:06:50.784 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:50.784 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:50.784 data = bsize=4096 blocks=130560, imaxpct=25 00:06:50.784 = sunit=0 swidth=0 blks 00:06:50.784 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:50.784 log =internal log bsize=4096 blocks=16384, version=2 00:06:50.784 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:50.784 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:51.352 Discarding blocks...Done. 00:06:51.352 06:59:35 -- common/autotest_common.sh@921 -- # return 0 00:06:51.352 06:59:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:53.885 06:59:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:53.885 06:59:37 -- target/filesystem.sh@25 -- # sync 00:06:53.885 06:59:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:53.885 06:59:37 -- target/filesystem.sh@27 -- # sync 00:06:53.885 06:59:37 -- target/filesystem.sh@29 -- # i=0 00:06:53.885 06:59:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:53.885 06:59:37 -- target/filesystem.sh@37 -- # kill -0 60380 00:06:53.885 06:59:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:53.885 06:59:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:53.885 06:59:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:53.885 06:59:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:53.885 ************************************ 00:06:53.885 END TEST filesystem_xfs 00:06:53.885 ************************************ 00:06:53.885 00:06:53.885 real 0m3.264s 00:06:53.885 user 0m0.029s 00:06:53.885 sys 0m0.061s 00:06:53.885 06:59:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.885 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:06:53.885 06:59:37 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:53.885 06:59:37 -- target/filesystem.sh@93 -- # sync 00:06:53.885 06:59:37 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:53.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:53.885 06:59:37 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:53.885 06:59:37 -- common/autotest_common.sh@1198 -- # local i=0 00:06:53.885 06:59:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:53.885 06:59:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:53.885 06:59:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:53.885 06:59:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:53.885 06:59:37 -- common/autotest_common.sh@1210 -- # return 0 00:06:53.885 06:59:37 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:53.885 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.885 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:06:53.885 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.885 06:59:37 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:53.885 06:59:37 -- target/filesystem.sh@101 -- # killprocess 60380 00:06:53.885 06:59:37 -- common/autotest_common.sh@926 -- # '[' -z 60380 ']' 00:06:53.885 06:59:37 -- common/autotest_common.sh@930 -- # kill -0 60380 00:06:53.885 06:59:37 -- 
common/autotest_common.sh@931 -- # uname 00:06:53.885 06:59:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.885 06:59:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60380 00:06:53.885 killing process with pid 60380 00:06:53.885 06:59:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:53.885 06:59:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:53.885 06:59:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60380' 00:06:53.885 06:59:37 -- common/autotest_common.sh@945 -- # kill 60380 00:06:53.885 06:59:37 -- common/autotest_common.sh@950 -- # wait 60380 00:06:54.821 06:59:38 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:54.821 00:06:54.821 real 0m9.703s 00:06:54.821 user 0m36.945s 00:06:54.821 sys 0m1.401s 00:06:54.821 06:59:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.822 06:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:54.822 ************************************ 00:06:54.822 END TEST nvmf_filesystem_no_in_capsule 00:06:54.822 ************************************ 00:06:54.822 06:59:38 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:54.822 06:59:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:54.822 06:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.822 06:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:54.822 ************************************ 00:06:54.822 START TEST nvmf_filesystem_in_capsule 00:06:54.822 ************************************ 00:06:54.822 06:59:38 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:54.822 06:59:38 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:54.822 06:59:38 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:54.822 06:59:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:54.822 06:59:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:54.822 06:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:54.822 06:59:38 -- nvmf/common.sh@469 -- # nvmfpid=60698 00:06:54.822 06:59:38 -- nvmf/common.sh@470 -- # waitforlisten 60698 00:06:54.822 06:59:38 -- common/autotest_common.sh@819 -- # '[' -z 60698 ']' 00:06:54.822 06:59:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.822 06:59:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:54.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.822 06:59:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:54.822 06:59:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.822 06:59:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:54.822 06:59:38 -- common/autotest_common.sh@10 -- # set +x 00:06:54.822 [2024-07-11 06:59:38.641495] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
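The in-capsule run that starts here launches a fresh nvmf_tgt (pid 60698) inside the nvmf_tgt_ns_spdk namespace and blocks in waitforlisten until the RPC socket appears before any rpc_cmd is issued. A minimal sketch of that launch-and-wait pattern, with the binary path, flags and socket path taken from the trace (the real waitforlisten helper does more, e.g. bounded retries against the listed pid):

  # Launch the target inside its network namespace and remember the pid.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Block until the RPC UNIX domain socket exists, bailing out if the target dies first.
  rpc_sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      [ -S "$rpc_sock" ] && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 1
  done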
00:06:54.822 [2024-07-11 06:59:38.641824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.822 [2024-07-11 06:59:38.787711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.080 [2024-07-11 06:59:38.925548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.080 [2024-07-11 06:59:38.925749] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.080 [2024-07-11 06:59:38.925766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.080 [2024-07-11 06:59:38.925778] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.080 [2024-07-11 06:59:38.925920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.080 [2024-07-11 06:59:38.926355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.080 [2024-07-11 06:59:38.926484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.080 [2024-07-11 06:59:38.926490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.648 06:59:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:55.648 06:59:39 -- common/autotest_common.sh@852 -- # return 0 00:06:55.648 06:59:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:55.648 06:59:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:55.648 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.648 06:59:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.648 06:59:39 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:55.648 06:59:39 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:55.648 06:59:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.648 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.648 [2024-07-11 06:59:39.627769] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.648 06:59:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.648 06:59:39 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:55.648 06:59:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.648 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 Malloc1 00:06:55.907 06:59:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.907 06:59:39 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:55.907 06:59:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.907 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 06:59:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.907 06:59:39 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:55.907 06:59:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.907 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 06:59:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.907 06:59:39 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.907 06:59:39 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.907 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 [2024-07-11 06:59:39.885954] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.907 06:59:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.907 06:59:39 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:55.907 06:59:39 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:55.907 06:59:39 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:55.907 06:59:39 -- common/autotest_common.sh@1359 -- # local bs 00:06:55.907 06:59:39 -- common/autotest_common.sh@1360 -- # local nb 00:06:55.907 06:59:39 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:55.907 06:59:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.907 06:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.907 06:59:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.907 06:59:39 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:55.907 { 00:06:55.907 "aliases": [ 00:06:55.907 "c3b2b122-270d-44ad-aa50-ab38b35e41e6" 00:06:55.907 ], 00:06:55.907 "assigned_rate_limits": { 00:06:55.907 "r_mbytes_per_sec": 0, 00:06:55.907 "rw_ios_per_sec": 0, 00:06:55.907 "rw_mbytes_per_sec": 0, 00:06:55.907 "w_mbytes_per_sec": 0 00:06:55.907 }, 00:06:55.907 "block_size": 512, 00:06:55.907 "claim_type": "exclusive_write", 00:06:55.907 "claimed": true, 00:06:55.907 "driver_specific": {}, 00:06:55.907 "memory_domains": [ 00:06:55.907 { 00:06:55.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.907 "dma_device_type": 2 00:06:55.907 } 00:06:55.907 ], 00:06:55.907 "name": "Malloc1", 00:06:55.907 "num_blocks": 1048576, 00:06:55.907 "product_name": "Malloc disk", 00:06:55.907 "supported_io_types": { 00:06:55.907 "abort": true, 00:06:55.907 "compare": false, 00:06:55.907 "compare_and_write": false, 00:06:55.907 "flush": true, 00:06:55.907 "nvme_admin": false, 00:06:55.907 "nvme_io": false, 00:06:55.907 "read": true, 00:06:55.907 "reset": true, 00:06:55.907 "unmap": true, 00:06:55.907 "write": true, 00:06:55.907 "write_zeroes": true 00:06:55.907 }, 00:06:55.907 "uuid": "c3b2b122-270d-44ad-aa50-ab38b35e41e6", 00:06:55.907 "zoned": false 00:06:55.907 } 00:06:55.907 ]' 00:06:55.907 06:59:39 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:56.165 06:59:39 -- common/autotest_common.sh@1362 -- # bs=512 00:06:56.165 06:59:39 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:56.165 06:59:40 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:56.165 06:59:40 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:56.165 06:59:40 -- common/autotest_common.sh@1367 -- # echo 512 00:06:56.165 06:59:40 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:56.165 06:59:40 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.165 06:59:40 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:56.165 06:59:40 -- common/autotest_common.sh@1177 -- # local i=0 00:06:56.165 06:59:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:56.165 06:59:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:56.165 06:59:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:58.688 06:59:42 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:58.688 06:59:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:58.688 06:59:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:58.688 06:59:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:58.688 06:59:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:58.688 06:59:42 -- common/autotest_common.sh@1187 -- # return 0 00:06:58.688 06:59:42 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:58.688 06:59:42 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:58.688 06:59:42 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:58.688 06:59:42 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:58.688 06:59:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:58.688 06:59:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:58.688 06:59:42 -- setup/common.sh@80 -- # echo 536870912 00:06:58.688 06:59:42 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:58.688 06:59:42 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:58.688 06:59:42 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:58.688 06:59:42 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:58.688 06:59:42 -- target/filesystem.sh@69 -- # partprobe 00:06:58.688 06:59:42 -- target/filesystem.sh@70 -- # sleep 1 00:06:59.624 06:59:43 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:59.624 06:59:43 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:59.624 06:59:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:59.624 06:59:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.624 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.624 ************************************ 00:06:59.624 START TEST filesystem_in_capsule_ext4 00:06:59.624 ************************************ 00:06:59.624 06:59:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:59.624 06:59:43 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:59.624 06:59:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.624 06:59:43 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:59.624 06:59:43 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:59.624 06:59:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:59.624 06:59:43 -- common/autotest_common.sh@904 -- # local i=0 00:06:59.624 06:59:43 -- common/autotest_common.sh@905 -- # local force 00:06:59.624 06:59:43 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:59.624 06:59:43 -- common/autotest_common.sh@908 -- # force=-F 00:06:59.624 06:59:43 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:59.624 mke2fs 1.46.5 (30-Dec-2021) 00:06:59.624 Discarding device blocks: 0/522240 done 00:06:59.624 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:59.624 Filesystem UUID: 050e0a41-a8d7-4548-b748-4f5d1c068d77 00:06:59.624 Superblock backups stored on blocks: 00:06:59.624 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:59.624 00:06:59.624 Allocating group tables: 0/64 done 00:06:59.624 Writing inode tables: 0/64 done 00:06:59.624 Creating journal (8192 blocks): done 00:06:59.624 Writing superblocks and filesystem accounting information: 0/64 done 00:06:59.624 00:06:59.624 
06:59:43 -- common/autotest_common.sh@921 -- # return 0 00:06:59.624 06:59:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.624 06:59:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:59.881 06:59:43 -- target/filesystem.sh@25 -- # sync 00:06:59.881 06:59:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:59.881 06:59:43 -- target/filesystem.sh@27 -- # sync 00:06:59.881 06:59:43 -- target/filesystem.sh@29 -- # i=0 00:06:59.881 06:59:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:59.881 06:59:43 -- target/filesystem.sh@37 -- # kill -0 60698 00:06:59.881 06:59:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:59.881 06:59:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:59.881 06:59:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:59.881 06:59:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:59.881 ************************************ 00:06:59.881 END TEST filesystem_in_capsule_ext4 00:06:59.881 ************************************ 00:06:59.881 00:06:59.881 real 0m0.479s 00:06:59.881 user 0m0.019s 00:06:59.881 sys 0m0.058s 00:06:59.881 06:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.881 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.881 06:59:43 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:59.881 06:59:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:59.881 06:59:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.881 06:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.881 ************************************ 00:06:59.881 START TEST filesystem_in_capsule_btrfs 00:06:59.881 ************************************ 00:06:59.881 06:59:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:59.881 06:59:43 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:59.881 06:59:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.881 06:59:43 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:59.881 06:59:43 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:59.881 06:59:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:59.881 06:59:43 -- common/autotest_common.sh@904 -- # local i=0 00:06:59.881 06:59:43 -- common/autotest_common.sh@905 -- # local force 00:06:59.881 06:59:43 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:59.881 06:59:43 -- common/autotest_common.sh@910 -- # force=-f 00:06:59.881 06:59:43 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:00.140 btrfs-progs v6.6.2 00:07:00.140 See https://btrfs.readthedocs.io for more information. 00:07:00.140 00:07:00.140 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
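Before each mkfs invocation, make_filesystem picks the force flag from the filesystem type, which is why the ext4 pass above used -F while this btrfs pass (and the xfs pass below) uses -f. The branch, reduced to a sketch with the argument names from the trace:

  # Sketch of make_filesystem's flag selection before calling mkfs.<fstype>.
  fstype=$1            # ext4 | btrfs | xfs
  dev_name=$2          # e.g. /dev/nvme0n1p1

  if [ "$fstype" = ext4 ]; then
      force=-F         # mkfs.ext4 forces with uppercase -F
  else
      force=-f         # mkfs.btrfs and mkfs.xfs force with lowercase -f
  fi
  "mkfs.$fstype" $force "$dev_name"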
00:07:00.140 NOTE: several default settings have changed in version 5.15, please make sure 00:07:00.140 this does not affect your deployments: 00:07:00.140 - DUP for metadata (-m dup) 00:07:00.140 - enabled no-holes (-O no-holes) 00:07:00.140 - enabled free-space-tree (-R free-space-tree) 00:07:00.140 00:07:00.140 Label: (null) 00:07:00.140 UUID: c1234f76-1b07-4abd-af23-44528a0e1e91 00:07:00.140 Node size: 16384 00:07:00.140 Sector size: 4096 00:07:00.140 Filesystem size: 510.00MiB 00:07:00.140 Block group profiles: 00:07:00.140 Data: single 8.00MiB 00:07:00.140 Metadata: DUP 32.00MiB 00:07:00.140 System: DUP 8.00MiB 00:07:00.140 SSD detected: yes 00:07:00.140 Zoned device: no 00:07:00.140 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:00.140 Runtime features: free-space-tree 00:07:00.140 Checksum: crc32c 00:07:00.140 Number of devices: 1 00:07:00.140 Devices: 00:07:00.140 ID SIZE PATH 00:07:00.140 1 510.00MiB /dev/nvme0n1p1 00:07:00.140 00:07:00.140 06:59:44 -- common/autotest_common.sh@921 -- # return 0 00:07:00.140 06:59:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.140 06:59:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.140 06:59:44 -- target/filesystem.sh@25 -- # sync 00:07:00.140 06:59:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.140 06:59:44 -- target/filesystem.sh@27 -- # sync 00:07:00.140 06:59:44 -- target/filesystem.sh@29 -- # i=0 00:07:00.140 06:59:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.140 06:59:44 -- target/filesystem.sh@37 -- # kill -0 60698 00:07:00.140 06:59:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.140 06:59:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.140 06:59:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.140 06:59:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.140 ************************************ 00:07:00.140 END TEST filesystem_in_capsule_btrfs 00:07:00.140 ************************************ 00:07:00.140 00:07:00.140 real 0m0.274s 00:07:00.140 user 0m0.024s 00:07:00.140 sys 0m0.065s 00:07:00.140 06:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.140 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 06:59:44 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:00.399 06:59:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:00.399 06:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.399 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:07:00.399 ************************************ 00:07:00.399 START TEST filesystem_in_capsule_xfs 00:07:00.399 ************************************ 00:07:00.399 06:59:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:00.399 06:59:44 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:00.399 06:59:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.399 06:59:44 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:00.399 06:59:44 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:00.399 06:59:44 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:00.399 06:59:44 -- common/autotest_common.sh@904 -- # local i=0 00:07:00.399 06:59:44 -- common/autotest_common.sh@905 -- # local force 00:07:00.399 06:59:44 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:00.399 06:59:44 -- common/autotest_common.sh@910 -- # force=-f 
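The xfs pass that follows repeats the cycle every filesystem_* subtest runs against the partition: build the filesystem, mount it, do a small write/remove round-trip, unmount, then confirm the target is still alive and the block devices are still visible. Condensed into one plain sequence (device, mountpoint and pid are the ones from this run; the real filesystem.sh wraps each step in traced helpers and reports timings):

  dev=/dev/nvme0n1p1
  mnt=/mnt/device
  tgt_pid=60698                              # nvmf_tgt started for the in-capsule tests

  mkfs.xfs -f "$dev"                         # create the filesystem on the exported namespace
  mount "$dev" "$mnt"
  touch "$mnt/aaa" && sync                   # small write round-trip
  rm "$mnt/aaa" && sync
  umount "$mnt"

  kill -0 "$tgt_pid"                         # target process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1      # controller still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present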
00:07:00.399 06:59:44 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:00.399 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:00.399 = sectsz=512 attr=2, projid32bit=1 00:07:00.399 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:00.399 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:00.399 data = bsize=4096 blocks=130560, imaxpct=25 00:07:00.399 = sunit=0 swidth=0 blks 00:07:00.400 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:00.400 log =internal log bsize=4096 blocks=16384, version=2 00:07:00.400 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:00.400 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:01.335 Discarding blocks...Done. 00:07:01.335 06:59:45 -- common/autotest_common.sh@921 -- # return 0 00:07:01.335 06:59:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.237 06:59:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.237 06:59:46 -- target/filesystem.sh@25 -- # sync 00:07:03.237 06:59:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.237 06:59:46 -- target/filesystem.sh@27 -- # sync 00:07:03.237 06:59:46 -- target/filesystem.sh@29 -- # i=0 00:07:03.237 06:59:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.237 06:59:46 -- target/filesystem.sh@37 -- # kill -0 60698 00:07:03.237 06:59:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.237 06:59:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.237 06:59:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.237 06:59:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.237 ************************************ 00:07:03.237 END TEST filesystem_in_capsule_xfs 00:07:03.237 ************************************ 00:07:03.237 00:07:03.237 real 0m2.657s 00:07:03.237 user 0m0.025s 00:07:03.237 sys 0m0.057s 00:07:03.237 06:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.237 06:59:46 -- common/autotest_common.sh@10 -- # set +x 00:07:03.237 06:59:46 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:03.237 06:59:46 -- target/filesystem.sh@93 -- # sync 00:07:03.237 06:59:46 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.237 06:59:47 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.237 06:59:47 -- common/autotest_common.sh@1198 -- # local i=0 00:07:03.237 06:59:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:03.237 06:59:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.237 06:59:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:03.237 06:59:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.237 06:59:47 -- common/autotest_common.sh@1210 -- # return 0 00:07:03.237 06:59:47 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.237 06:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:03.237 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:03.237 06:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:03.237 06:59:47 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:03.237 06:59:47 -- target/filesystem.sh@101 -- # killprocess 60698 00:07:03.237 06:59:47 -- common/autotest_common.sh@926 -- # '[' -z 60698 ']' 00:07:03.237 06:59:47 -- common/autotest_common.sh@930 -- # kill -0 60698 
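The teardown being traced here follows a fixed order: drop the test partition, disconnect the host, wait for the serial to vanish from lsblk, delete the subsystem over RPC, and finally stop the target (the killprocess steps continue just below). In outline (NQN, serial and pid are from this run; rpc_cmd is the harness's JSON-RPC wrapper, and waitforserial_disconnect/killprocess do the same with bounded retries and extra checks):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1            # remove the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # drop the NVMe/TCP connection

  # Wait until no block device with the test serial is left.
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      sleep 1
  done

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the subsystem from the target

  kill -0 60698 && kill 60698                               # stop the target started earlier
  wait 60698                                                # reap it (it is a child of this shell)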
00:07:03.237 06:59:47 -- common/autotest_common.sh@931 -- # uname 00:07:03.237 06:59:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:03.237 06:59:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60698 00:07:03.237 killing process with pid 60698 00:07:03.237 06:59:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:03.237 06:59:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:03.237 06:59:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60698' 00:07:03.237 06:59:47 -- common/autotest_common.sh@945 -- # kill 60698 00:07:03.237 06:59:47 -- common/autotest_common.sh@950 -- # wait 60698 00:07:03.804 06:59:47 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:03.804 00:07:03.804 real 0m9.107s 00:07:03.804 user 0m34.456s 00:07:03.804 sys 0m1.425s 00:07:03.804 06:59:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.804 ************************************ 00:07:03.804 END TEST nvmf_filesystem_in_capsule 00:07:03.804 ************************************ 00:07:03.804 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:03.804 06:59:47 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:03.804 06:59:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:03.804 06:59:47 -- nvmf/common.sh@116 -- # sync 00:07:03.804 06:59:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:03.804 06:59:47 -- nvmf/common.sh@119 -- # set +e 00:07:03.804 06:59:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:03.804 06:59:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:03.804 rmmod nvme_tcp 00:07:03.804 rmmod nvme_fabrics 00:07:03.804 rmmod nvme_keyring 00:07:03.804 06:59:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:03.804 06:59:47 -- nvmf/common.sh@123 -- # set -e 00:07:03.804 06:59:47 -- nvmf/common.sh@124 -- # return 0 00:07:03.804 06:59:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:03.804 06:59:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:03.804 06:59:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:03.804 06:59:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:03.804 06:59:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.804 06:59:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:03.804 06:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.804 06:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.804 06:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.804 06:59:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:03.804 00:07:03.804 real 0m19.604s 00:07:03.804 user 1m11.611s 00:07:03.804 sys 0m3.191s 00:07:03.804 06:59:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.804 ************************************ 00:07:03.804 END TEST nvmf_filesystem 00:07:03.804 ************************************ 00:07:03.804 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:04.063 06:59:47 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:04.063 06:59:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:04.063 06:59:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.063 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:04.063 ************************************ 00:07:04.063 START TEST nvmf_discovery 00:07:04.063 ************************************ 00:07:04.063 06:59:47 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:04.063 * Looking for test storage... 00:07:04.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:04.063 06:59:47 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.063 06:59:47 -- nvmf/common.sh@7 -- # uname -s 00:07:04.063 06:59:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.063 06:59:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.063 06:59:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.063 06:59:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.063 06:59:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.063 06:59:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.063 06:59:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.063 06:59:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.063 06:59:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.063 06:59:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.063 06:59:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:07:04.063 06:59:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:07:04.063 06:59:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.063 06:59:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.063 06:59:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:04.063 06:59:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.063 06:59:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.063 06:59:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.063 06:59:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.063 06:59:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.063 06:59:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.063 06:59:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.063 06:59:47 -- paths/export.sh@5 -- # export PATH 00:07:04.063 06:59:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.063 06:59:47 -- nvmf/common.sh@46 -- # : 0 00:07:04.063 06:59:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:04.063 06:59:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:04.063 06:59:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:04.063 06:59:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.063 06:59:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.063 06:59:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:04.063 06:59:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:04.063 06:59:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:04.063 06:59:47 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:04.063 06:59:47 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:04.063 06:59:47 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:04.063 06:59:47 -- target/discovery.sh@15 -- # hash nvme 00:07:04.063 06:59:47 -- target/discovery.sh@20 -- # nvmftestinit 00:07:04.063 06:59:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:04.063 06:59:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.063 06:59:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:04.063 06:59:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:04.063 06:59:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:04.063 06:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.063 06:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.063 06:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.063 06:59:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:04.063 06:59:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:04.063 06:59:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:04.063 06:59:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:04.063 06:59:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:04.063 06:59:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:04.063 06:59:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.063 06:59:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.063 06:59:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:04.063 06:59:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:04.063 06:59:47 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:04.063 06:59:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:04.063 06:59:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:04.063 06:59:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.063 06:59:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:04.063 06:59:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:04.063 06:59:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:04.063 06:59:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:04.063 06:59:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:04.063 06:59:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:04.063 Cannot find device "nvmf_tgt_br" 00:07:04.063 06:59:48 -- nvmf/common.sh@154 -- # true 00:07:04.063 06:59:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:04.063 Cannot find device "nvmf_tgt_br2" 00:07:04.063 06:59:48 -- nvmf/common.sh@155 -- # true 00:07:04.063 06:59:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:04.063 06:59:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:04.063 Cannot find device "nvmf_tgt_br" 00:07:04.063 06:59:48 -- nvmf/common.sh@157 -- # true 00:07:04.063 06:59:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:04.063 Cannot find device "nvmf_tgt_br2" 00:07:04.063 06:59:48 -- nvmf/common.sh@158 -- # true 00:07:04.063 06:59:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:04.063 06:59:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:04.063 06:59:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:04.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.063 06:59:48 -- nvmf/common.sh@161 -- # true 00:07:04.063 06:59:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:04.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:04.322 06:59:48 -- nvmf/common.sh@162 -- # true 00:07:04.322 06:59:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:04.322 06:59:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:04.322 06:59:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:04.322 06:59:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:04.322 06:59:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:04.322 06:59:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:04.322 06:59:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:04.322 06:59:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:04.322 06:59:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:04.322 06:59:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:04.322 06:59:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:04.322 06:59:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:04.322 06:59:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:04.322 06:59:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:04.322 06:59:48 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:04.322 06:59:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:04.322 06:59:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:04.322 06:59:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:04.322 06:59:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:04.322 06:59:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:04.322 06:59:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:04.322 06:59:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:04.322 06:59:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:04.322 06:59:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:04.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:07:04.322 00:07:04.322 --- 10.0.0.2 ping statistics --- 00:07:04.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.322 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:04.322 06:59:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:04.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:04.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:07:04.322 00:07:04.322 --- 10.0.0.3 ping statistics --- 00:07:04.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.322 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:04.322 06:59:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:04.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:04.322 00:07:04.322 --- 10.0.0.1 ping statistics --- 00:07:04.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.322 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:04.322 06:59:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.322 06:59:48 -- nvmf/common.sh@421 -- # return 0 00:07:04.322 06:59:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:04.322 06:59:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.322 06:59:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:04.322 06:59:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:04.322 06:59:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.323 06:59:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:04.323 06:59:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:04.323 06:59:48 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:04.323 06:59:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:04.323 06:59:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:04.323 06:59:48 -- common/autotest_common.sh@10 -- # set +x 00:07:04.323 06:59:48 -- nvmf/common.sh@469 -- # nvmfpid=61152 00:07:04.323 06:59:48 -- nvmf/common.sh@470 -- # waitforlisten 61152 00:07:04.323 06:59:48 -- common/autotest_common.sh@819 -- # '[' -z 61152 ']' 00:07:04.323 06:59:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.323 06:59:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.323 06:59:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.323 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.323 06:59:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.323 06:59:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.323 06:59:48 -- common/autotest_common.sh@10 -- # set +x 00:07:04.581 [2024-07-11 06:59:48.420562] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:04.581 [2024-07-11 06:59:48.420642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.581 [2024-07-11 06:59:48.556569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.840 [2024-07-11 06:59:48.643927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.840 [2024-07-11 06:59:48.644080] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.840 [2024-07-11 06:59:48.644093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.840 [2024-07-11 06:59:48.644102] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.840 [2024-07-11 06:59:48.644242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.840 [2024-07-11 06:59:48.644588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.840 [2024-07-11 06:59:48.644756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.840 [2024-07-11 06:59:48.644762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.409 06:59:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:05.409 06:59:49 -- common/autotest_common.sh@852 -- # return 0 00:07:05.409 06:59:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:05.409 06:59:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 06:59:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.409 06:59:49 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 [2024-07-11 06:59:49.369547] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.409 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.409 06:59:49 -- target/discovery.sh@26 -- # seq 1 4 00:07:05.409 06:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.409 06:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 Null1 00:07:05.409 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.409 06:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
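discovery.sh populates the freshly started target (pid 61152) with four identical null-backed subsystems plus a discovery listener and a referral, all through rpc_cmd, the harness wrapper around SPDK's JSON-RPC. The loop being traced here, written out in plain form (bdev size/block size, NQNs, address and ports are the ones from the trace):

  # Four null-backed subsystems, each exposing one namespace on 10.0.0.2:4420.
  for i in $(seq 1 4); do
      rpc_cmd bdev_null_create "Null$i" 102400 512                          # 102400 blocks of 512 B
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          -a -s "SPDK0000000000000$i"                                       # allow any host, set serial as shown in the trace
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"  # expose the bdev as a namespace
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

  # Make the discovery service reachable and advertise a referral on port 4430.
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430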
00:07:05.409 06:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.409 06:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 [2024-07-11 06:59:49.444102] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.409 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.409 06:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.409 06:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 Null2 00:07:05.409 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.409 06:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.409 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.409 06:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:05.409 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.409 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.668 06:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 Null3 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:05.668 06:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 Null4 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.668 06:59:49 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 4420 00:07:05.668 00:07:05.668 Discovery Log Number of Records 6, Generation counter 6 00:07:05.668 =====Discovery Log Entry 0====== 00:07:05.668 trtype: tcp 00:07:05.668 adrfam: ipv4 00:07:05.668 subtype: current discovery subsystem 00:07:05.668 treq: not required 00:07:05.668 portid: 0 00:07:05.668 trsvcid: 4420 00:07:05.668 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:05.668 traddr: 10.0.0.2 00:07:05.668 eflags: explicit discovery connections, duplicate discovery information 00:07:05.668 sectype: none 00:07:05.668 =====Discovery Log Entry 1====== 00:07:05.668 trtype: tcp 00:07:05.668 adrfam: ipv4 00:07:05.668 subtype: nvme subsystem 00:07:05.668 treq: not required 00:07:05.668 portid: 0 00:07:05.668 trsvcid: 4420 00:07:05.668 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:05.668 traddr: 10.0.0.2 00:07:05.668 eflags: none 00:07:05.668 sectype: none 00:07:05.668 =====Discovery Log Entry 2====== 00:07:05.668 trtype: tcp 00:07:05.668 adrfam: ipv4 00:07:05.668 subtype: nvme subsystem 00:07:05.668 treq: not required 00:07:05.668 portid: 0 00:07:05.668 trsvcid: 4420 
00:07:05.668 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:05.668 traddr: 10.0.0.2 00:07:05.668 eflags: none 00:07:05.668 sectype: none 00:07:05.668 =====Discovery Log Entry 3====== 00:07:05.668 trtype: tcp 00:07:05.668 adrfam: ipv4 00:07:05.668 subtype: nvme subsystem 00:07:05.668 treq: not required 00:07:05.668 portid: 0 00:07:05.668 trsvcid: 4420 00:07:05.668 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:05.668 traddr: 10.0.0.2 00:07:05.668 eflags: none 00:07:05.668 sectype: none 00:07:05.668 =====Discovery Log Entry 4====== 00:07:05.668 trtype: tcp 00:07:05.668 adrfam: ipv4 00:07:05.668 subtype: nvme subsystem 00:07:05.668 treq: not required 00:07:05.668 portid: 0 00:07:05.668 trsvcid: 4420 00:07:05.668 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:05.668 traddr: 10.0.0.2 00:07:05.668 eflags: none 00:07:05.668 sectype: none 00:07:05.668 =====Discovery Log Entry 5====== 00:07:05.668 trtype: tcp 00:07:05.668 adrfam: ipv4 00:07:05.668 subtype: discovery subsystem referral 00:07:05.668 treq: not required 00:07:05.668 portid: 0 00:07:05.668 trsvcid: 4430 00:07:05.668 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:05.668 traddr: 10.0.0.2 00:07:05.668 eflags: none 00:07:05.668 sectype: none 00:07:05.668 Perform nvmf subsystem discovery via RPC 00:07:05.668 06:59:49 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:05.668 06:59:49 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:05.668 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.668 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.668 [2024-07-11 06:59:49.636189] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:05.668 [ 00:07:05.668 { 00:07:05.668 "allow_any_host": true, 00:07:05.668 "hosts": [], 00:07:05.668 "listen_addresses": [ 00:07:05.668 { 00:07:05.668 "adrfam": "IPv4", 00:07:05.668 "traddr": "10.0.0.2", 00:07:05.668 "transport": "TCP", 00:07:05.668 "trsvcid": "4420", 00:07:05.668 "trtype": "TCP" 00:07:05.668 } 00:07:05.668 ], 00:07:05.668 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:05.668 "subtype": "Discovery" 00:07:05.668 }, 00:07:05.668 { 00:07:05.668 "allow_any_host": true, 00:07:05.668 "hosts": [], 00:07:05.668 "listen_addresses": [ 00:07:05.668 { 00:07:05.668 "adrfam": "IPv4", 00:07:05.668 "traddr": "10.0.0.2", 00:07:05.668 "transport": "TCP", 00:07:05.668 "trsvcid": "4420", 00:07:05.668 "trtype": "TCP" 00:07:05.668 } 00:07:05.668 ], 00:07:05.668 "max_cntlid": 65519, 00:07:05.668 "max_namespaces": 32, 00:07:05.668 "min_cntlid": 1, 00:07:05.668 "model_number": "SPDK bdev Controller", 00:07:05.668 "namespaces": [ 00:07:05.668 { 00:07:05.668 "bdev_name": "Null1", 00:07:05.668 "name": "Null1", 00:07:05.668 "nguid": "590F09107CB74ECEBCC2618A4B46BC39", 00:07:05.668 "nsid": 1, 00:07:05.668 "uuid": "590f0910-7cb7-4ece-bcc2-618a4b46bc39" 00:07:05.668 } 00:07:05.668 ], 00:07:05.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:05.668 "serial_number": "SPDK00000000000001", 00:07:05.668 "subtype": "NVMe" 00:07:05.668 }, 00:07:05.668 { 00:07:05.668 "allow_any_host": true, 00:07:05.668 "hosts": [], 00:07:05.668 "listen_addresses": [ 00:07:05.668 { 00:07:05.668 "adrfam": "IPv4", 00:07:05.669 "traddr": "10.0.0.2", 00:07:05.669 "transport": "TCP", 00:07:05.669 "trsvcid": "4420", 00:07:05.669 "trtype": "TCP" 00:07:05.669 } 00:07:05.669 ], 00:07:05.669 "max_cntlid": 65519, 00:07:05.669 "max_namespaces": 32, 00:07:05.669 "min_cntlid": 1, 
00:07:05.669 "model_number": "SPDK bdev Controller", 00:07:05.669 "namespaces": [ 00:07:05.669 { 00:07:05.669 "bdev_name": "Null2", 00:07:05.669 "name": "Null2", 00:07:05.669 "nguid": "BF43221D92FF4531875AA48743337EA4", 00:07:05.669 "nsid": 1, 00:07:05.669 "uuid": "bf43221d-92ff-4531-875a-a48743337ea4" 00:07:05.669 } 00:07:05.669 ], 00:07:05.669 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:05.669 "serial_number": "SPDK00000000000002", 00:07:05.669 "subtype": "NVMe" 00:07:05.669 }, 00:07:05.669 { 00:07:05.669 "allow_any_host": true, 00:07:05.669 "hosts": [], 00:07:05.669 "listen_addresses": [ 00:07:05.669 { 00:07:05.669 "adrfam": "IPv4", 00:07:05.669 "traddr": "10.0.0.2", 00:07:05.669 "transport": "TCP", 00:07:05.669 "trsvcid": "4420", 00:07:05.669 "trtype": "TCP" 00:07:05.669 } 00:07:05.669 ], 00:07:05.669 "max_cntlid": 65519, 00:07:05.669 "max_namespaces": 32, 00:07:05.669 "min_cntlid": 1, 00:07:05.669 "model_number": "SPDK bdev Controller", 00:07:05.669 "namespaces": [ 00:07:05.669 { 00:07:05.669 "bdev_name": "Null3", 00:07:05.669 "name": "Null3", 00:07:05.669 "nguid": "7C13F4E078E44B0E910B04890BC8B0D5", 00:07:05.669 "nsid": 1, 00:07:05.669 "uuid": "7c13f4e0-78e4-4b0e-910b-04890bc8b0d5" 00:07:05.669 } 00:07:05.669 ], 00:07:05.669 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:05.669 "serial_number": "SPDK00000000000003", 00:07:05.669 "subtype": "NVMe" 00:07:05.669 }, 00:07:05.669 { 00:07:05.669 "allow_any_host": true, 00:07:05.669 "hosts": [], 00:07:05.669 "listen_addresses": [ 00:07:05.669 { 00:07:05.669 "adrfam": "IPv4", 00:07:05.669 "traddr": "10.0.0.2", 00:07:05.669 "transport": "TCP", 00:07:05.669 "trsvcid": "4420", 00:07:05.669 "trtype": "TCP" 00:07:05.669 } 00:07:05.669 ], 00:07:05.669 "max_cntlid": 65519, 00:07:05.669 "max_namespaces": 32, 00:07:05.669 "min_cntlid": 1, 00:07:05.669 "model_number": "SPDK bdev Controller", 00:07:05.669 "namespaces": [ 00:07:05.669 { 00:07:05.669 "bdev_name": "Null4", 00:07:05.669 "name": "Null4", 00:07:05.669 "nguid": "3BE9E1C3D42C4DE498FF5E98C81D161C", 00:07:05.669 "nsid": 1, 00:07:05.669 "uuid": "3be9e1c3-d42c-4de4-98ff-5e98c81d161c" 00:07:05.669 } 00:07:05.669 ], 00:07:05.669 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:05.669 "serial_number": "SPDK00000000000004", 00:07:05.669 "subtype": "NVMe" 00:07:05.669 } 00:07:05.669 ] 00:07:05.669 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.669 06:59:49 -- target/discovery.sh@42 -- # seq 1 4 00:07:05.669 06:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.669 06:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.669 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.669 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.669 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.669 06:59:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:05.669 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.669 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.669 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.669 06:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.669 06:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:05.669 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.669 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.669 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.669 06:59:49 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:05.669 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.669 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.669 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.669 06:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.669 06:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:05.669 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.669 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.928 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.928 06:59:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:05.928 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.928 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.928 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.928 06:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:05.928 06:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:05.928 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.928 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.928 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.928 06:59:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:05.928 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.928 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.928 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.928 06:59:49 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:05.928 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.928 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.928 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.928 06:59:49 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:05.928 06:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.928 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.928 06:59:49 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:05.928 06:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.928 06:59:49 -- target/discovery.sh@49 -- # check_bdevs= 00:07:05.928 06:59:49 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:05.928 06:59:49 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:05.928 06:59:49 -- target/discovery.sh@57 -- # nvmftestfini 00:07:05.928 06:59:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:05.928 06:59:49 -- nvmf/common.sh@116 -- # sync 00:07:05.928 06:59:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:05.928 06:59:49 -- nvmf/common.sh@119 -- # set +e 00:07:05.928 06:59:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:05.928 06:59:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:05.928 rmmod nvme_tcp 00:07:05.928 rmmod nvme_fabrics 00:07:05.928 rmmod nvme_keyring 00:07:05.928 06:59:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:05.928 06:59:49 -- nvmf/common.sh@123 -- # set -e 00:07:05.928 06:59:49 -- nvmf/common.sh@124 -- # return 0 00:07:05.928 06:59:49 -- nvmf/common.sh@477 -- # '[' -n 61152 ']' 00:07:05.928 06:59:49 -- nvmf/common.sh@478 -- # killprocess 61152 00:07:05.928 06:59:49 -- common/autotest_common.sh@926 -- # '[' -z 61152 ']' 00:07:05.928 06:59:49 -- 
common/autotest_common.sh@930 -- # kill -0 61152 00:07:05.928 06:59:49 -- common/autotest_common.sh@931 -- # uname 00:07:05.928 06:59:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:05.928 06:59:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61152 00:07:05.928 killing process with pid 61152 00:07:05.928 06:59:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:05.928 06:59:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:05.928 06:59:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61152' 00:07:05.928 06:59:49 -- common/autotest_common.sh@945 -- # kill 61152 00:07:05.928 [2024-07-11 06:59:49.900822] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:05.928 06:59:49 -- common/autotest_common.sh@950 -- # wait 61152 00:07:06.185 06:59:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:06.185 06:59:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:06.185 06:59:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:06.185 06:59:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.185 06:59:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:06.185 06:59:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.185 06:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.185 06:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.443 06:59:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:06.443 ************************************ 00:07:06.443 END TEST nvmf_discovery 00:07:06.443 ************************************ 00:07:06.443 00:07:06.443 real 0m2.375s 00:07:06.443 user 0m6.418s 00:07:06.443 sys 0m0.612s 00:07:06.443 06:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.443 06:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:06.443 06:59:50 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:06.443 06:59:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:06.443 06:59:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.443 06:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:06.443 ************************************ 00:07:06.443 START TEST nvmf_referrals 00:07:06.443 ************************************ 00:07:06.443 06:59:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:06.443 * Looking for test storage... 
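For reference, the teardown the discovery test just performed condenses to the sequence below (a sketch assembled from the commands in this log; rpc_cmd is the test suite's shell wrapper that forwards each call to SPDK's scripts/rpc.py against the running nvmf_tgt, and the NQNs, bdev names, and referral address are the ones used above):

    # Dump the discovery subsystem plus cnode1-4, then tear everything down.
    rpc_cmd nvmf_get_subsystems
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the NVMe-oF subsystem
        rpc_cmd bdev_null_delete "Null$i"                             # drop its backing null bdev
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    rpc_cmd bdev_get_bdevs | jq -r '.[].name'                         # expect empty output afterwards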
00:07:06.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:06.443 06:59:50 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:06.443 06:59:50 -- nvmf/common.sh@7 -- # uname -s 00:07:06.443 06:59:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.443 06:59:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.443 06:59:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.443 06:59:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.443 06:59:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.443 06:59:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.443 06:59:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.443 06:59:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.444 06:59:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.444 06:59:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.444 06:59:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:07:06.444 06:59:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:07:06.444 06:59:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.444 06:59:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.444 06:59:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:06.444 06:59:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.444 06:59:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.444 06:59:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.444 06:59:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.444 06:59:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.444 06:59:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.444 06:59:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.444 06:59:50 -- 
paths/export.sh@5 -- # export PATH 00:07:06.444 06:59:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.444 06:59:50 -- nvmf/common.sh@46 -- # : 0 00:07:06.444 06:59:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:06.444 06:59:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:06.444 06:59:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:06.444 06:59:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.444 06:59:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.444 06:59:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:06.444 06:59:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:06.444 06:59:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:06.444 06:59:50 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:06.444 06:59:50 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:06.444 06:59:50 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:06.444 06:59:50 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:06.444 06:59:50 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:06.444 06:59:50 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:06.444 06:59:50 -- target/referrals.sh@37 -- # nvmftestinit 00:07:06.444 06:59:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:06.444 06:59:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.444 06:59:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:06.444 06:59:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:06.444 06:59:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:06.444 06:59:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.444 06:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.444 06:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.444 06:59:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:06.444 06:59:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:06.444 06:59:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:06.444 06:59:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:06.444 06:59:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:06.444 06:59:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:06.444 06:59:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.444 06:59:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.444 06:59:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:06.444 06:59:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:06.444 06:59:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:06.444 06:59:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:06.444 06:59:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:06.444 06:59:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.444 06:59:50 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:06.444 06:59:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:06.444 06:59:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:06.444 06:59:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:06.444 06:59:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:06.444 06:59:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:06.444 Cannot find device "nvmf_tgt_br" 00:07:06.444 06:59:50 -- nvmf/common.sh@154 -- # true 00:07:06.444 06:59:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:06.444 Cannot find device "nvmf_tgt_br2" 00:07:06.444 06:59:50 -- nvmf/common.sh@155 -- # true 00:07:06.444 06:59:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:06.444 06:59:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:06.444 Cannot find device "nvmf_tgt_br" 00:07:06.444 06:59:50 -- nvmf/common.sh@157 -- # true 00:07:06.444 06:59:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:06.444 Cannot find device "nvmf_tgt_br2" 00:07:06.444 06:59:50 -- nvmf/common.sh@158 -- # true 00:07:06.444 06:59:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:06.444 06:59:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:06.702 06:59:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:06.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.702 06:59:50 -- nvmf/common.sh@161 -- # true 00:07:06.702 06:59:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:06.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.702 06:59:50 -- nvmf/common.sh@162 -- # true 00:07:06.702 06:59:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:06.702 06:59:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:06.702 06:59:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:06.702 06:59:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:06.702 06:59:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:06.702 06:59:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:06.702 06:59:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:06.702 06:59:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:06.702 06:59:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:06.702 06:59:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:06.702 06:59:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:06.702 06:59:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:06.702 06:59:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:06.702 06:59:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:06.702 06:59:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:06.702 06:59:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:06.702 06:59:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:06.702 06:59:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:06.702 06:59:50 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:06.702 06:59:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:06.702 06:59:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:06.702 06:59:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:06.702 06:59:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:06.702 06:59:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:06.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:07:06.702 00:07:06.702 --- 10.0.0.2 ping statistics --- 00:07:06.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.702 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:06.702 06:59:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:06.702 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:06.702 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:07:06.702 00:07:06.702 --- 10.0.0.3 ping statistics --- 00:07:06.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.703 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:06.703 06:59:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:06.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:06.703 00:07:06.703 --- 10.0.0.1 ping statistics --- 00:07:06.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.703 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:06.703 06:59:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.703 06:59:50 -- nvmf/common.sh@421 -- # return 0 00:07:06.703 06:59:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:06.703 06:59:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.703 06:59:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:06.703 06:59:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:06.703 06:59:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.703 06:59:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:06.703 06:59:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:06.703 06:59:50 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:06.703 06:59:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:06.703 06:59:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:06.703 06:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:06.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.703 06:59:50 -- nvmf/common.sh@469 -- # nvmfpid=61382 00:07:06.703 06:59:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:06.703 06:59:50 -- nvmf/common.sh@470 -- # waitforlisten 61382 00:07:06.703 06:59:50 -- common/autotest_common.sh@819 -- # '[' -z 61382 ']' 00:07:06.703 06:59:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.703 06:59:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:06.703 06:59:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.703 06:59:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:06.703 06:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:06.960 [2024-07-11 06:59:50.824582] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.960 [2024-07-11 06:59:50.825484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.960 [2024-07-11 06:59:50.964855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.217 [2024-07-11 06:59:51.124097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.217 [2024-07-11 06:59:51.124625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.217 [2024-07-11 06:59:51.124789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.217 [2024-07-11 06:59:51.125021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.217 [2024-07-11 06:59:51.125230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.217 [2024-07-11 06:59:51.125297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.217 [2024-07-11 06:59:51.125362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.217 [2024-07-11 06:59:51.125364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.151 06:59:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:08.151 06:59:51 -- common/autotest_common.sh@852 -- # return 0 00:07:08.151 06:59:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:08.151 06:59:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:08.151 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.151 06:59:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.151 06:59:51 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.151 06:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.151 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.151 [2024-07-11 06:59:51.895946] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.151 06:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.151 06:59:51 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:08.151 06:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.151 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.151 [2024-07-11 06:59:51.917218] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:08.151 06:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.151 06:59:51 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:08.151 06:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.151 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.151 06:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.151 06:59:51 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:08.151 06:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.152 06:59:51 -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.152 06:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.152 06:59:51 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:08.152 06:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.152 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.152 06:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.152 06:59:51 -- target/referrals.sh@48 -- # jq length 00:07:08.152 06:59:51 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.152 06:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.152 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.152 06:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.152 06:59:52 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:08.152 06:59:52 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:08.152 06:59:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:08.152 06:59:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.152 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.152 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.152 06:59:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:08.152 06:59:52 -- target/referrals.sh@21 -- # sort 00:07:08.152 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.152 06:59:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:08.152 06:59:52 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:08.152 06:59:52 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:08.152 06:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.152 06:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.152 06:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.152 06:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.152 06:59:52 -- target/referrals.sh@26 -- # sort 00:07:08.152 06:59:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:08.152 06:59:52 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:08.152 06:59:52 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:08.152 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.152 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.152 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.152 06:59:52 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:08.152 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.152 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.152 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.152 06:59:52 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:08.152 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.152 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 06:59:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.410 06:59:52 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.410 06:59:52 -- target/referrals.sh@56 -- # jq length 00:07:08.410 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.410 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.410 06:59:52 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:08.410 06:59:52 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:08.410 06:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.410 06:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.410 06:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.410 06:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.410 06:59:52 -- target/referrals.sh@26 -- # sort 00:07:08.410 06:59:52 -- target/referrals.sh@26 -- # echo 00:07:08.410 06:59:52 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:08.410 06:59:52 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:08.410 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.410 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.410 06:59:52 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:08.410 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.410 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.410 06:59:52 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:08.410 06:59:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:08.410 06:59:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.410 06:59:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:08.410 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.410 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 06:59:52 -- target/referrals.sh@21 -- # sort 00:07:08.410 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.410 06:59:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:08.410 06:59:52 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:08.410 06:59:52 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:08.410 06:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.410 06:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.410 06:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.410 06:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.410 06:59:52 -- target/referrals.sh@26 -- # sort 00:07:08.669 06:59:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:08.669 06:59:52 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:08.670 06:59:52 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:08.670 06:59:52 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:08.670 06:59:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:08.670 06:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.670 06:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:08.670 06:59:52 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:08.670 06:59:52 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:08.670 06:59:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:08.670 06:59:52 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:08.670 06:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.670 06:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:08.670 06:59:52 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:08.670 06:59:52 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:08.670 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.670 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.670 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.670 06:59:52 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:08.670 06:59:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:08.670 06:59:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.670 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.670 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.670 06:59:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:08.670 06:59:52 -- target/referrals.sh@21 -- # sort 00:07:08.670 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.670 06:59:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:08.670 06:59:52 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:08.670 06:59:52 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:08.670 06:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.670 06:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.670 06:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.670 06:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.670 06:59:52 -- target/referrals.sh@26 -- # sort 00:07:08.928 06:59:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:08.928 06:59:52 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:08.928 06:59:52 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme 
subsystem' 00:07:08.928 06:59:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:08.928 06:59:52 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:08.928 06:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.928 06:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:08.928 06:59:52 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:08.928 06:59:52 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:08.928 06:59:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:08.928 06:59:52 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:08.928 06:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.928 06:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:08.928 06:59:52 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:08.928 06:59:52 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:08.928 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.928 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.928 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.928 06:59:52 -- target/referrals.sh@82 -- # jq length 00:07:08.928 06:59:52 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:08.928 06:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.928 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:08.928 06:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.928 06:59:52 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:08.928 06:59:52 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:08.928 06:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:08.928 06:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:08.928 06:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:08.928 06:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:08.928 06:59:52 -- target/referrals.sh@26 -- # sort 00:07:09.188 06:59:53 -- target/referrals.sh@26 -- # echo 00:07:09.188 06:59:53 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:09.188 06:59:53 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:09.188 06:59:53 -- target/referrals.sh@86 -- # nvmftestfini 00:07:09.188 06:59:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:09.188 06:59:53 -- nvmf/common.sh@116 -- # sync 00:07:09.188 06:59:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:09.188 06:59:53 -- nvmf/common.sh@119 -- # set +e 00:07:09.188 06:59:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:09.188 06:59:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:09.188 rmmod nvme_tcp 00:07:09.188 rmmod nvme_fabrics 00:07:09.188 rmmod nvme_keyring 
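The referral checks exercised above reduce to the following flow (a sketch built from the commands in this log; rpc_cmd wraps SPDK's scripts/rpc.py, and $NVME_HOSTNQN/$NVME_HOSTID are the values produced by 'nvme gen-hostnqn' when common.sh was sourced):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length        # expect 3
    # Verify the referrals are visible to an initiator through the discovery service:
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done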
00:07:09.188 06:59:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:09.188 06:59:53 -- nvmf/common.sh@123 -- # set -e 00:07:09.188 06:59:53 -- nvmf/common.sh@124 -- # return 0 00:07:09.188 06:59:53 -- nvmf/common.sh@477 -- # '[' -n 61382 ']' 00:07:09.188 06:59:53 -- nvmf/common.sh@478 -- # killprocess 61382 00:07:09.188 06:59:53 -- common/autotest_common.sh@926 -- # '[' -z 61382 ']' 00:07:09.188 06:59:53 -- common/autotest_common.sh@930 -- # kill -0 61382 00:07:09.188 06:59:53 -- common/autotest_common.sh@931 -- # uname 00:07:09.188 06:59:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:09.188 06:59:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61382 00:07:09.188 06:59:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:09.188 06:59:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:09.188 killing process with pid 61382 00:07:09.188 06:59:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61382' 00:07:09.188 06:59:53 -- common/autotest_common.sh@945 -- # kill 61382 00:07:09.188 06:59:53 -- common/autotest_common.sh@950 -- # wait 61382 00:07:09.447 06:59:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:09.447 06:59:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:09.447 06:59:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:09.447 06:59:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:09.447 06:59:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:09.447 06:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.447 06:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.447 06:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.706 06:59:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:09.706 00:07:09.706 real 0m3.197s 00:07:09.706 user 0m10.386s 00:07:09.706 sys 0m0.837s 00:07:09.706 06:59:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.706 ************************************ 00:07:09.706 END TEST nvmf_referrals 00:07:09.706 ************************************ 00:07:09.706 06:59:53 -- common/autotest_common.sh@10 -- # set +x 00:07:09.706 06:59:53 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:09.706 06:59:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:09.706 06:59:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.706 06:59:53 -- common/autotest_common.sh@10 -- # set +x 00:07:09.706 ************************************ 00:07:09.706 START TEST nvmf_connect_disconnect 00:07:09.706 ************************************ 00:07:09.706 06:59:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:09.706 * Looking for test storage... 
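The long run of "disconnected 1 controller(s)" lines later in this test comes from a connect/disconnect loop along the lines sketched below (an approximation: num_iterations=100 and NVME_CONNECT='nvme connect -i 8' are set by the script as shown further down, while the wait-for-namespace step is only summarized here):

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        # ... wait for the namespace backed by Malloc0 to appear ...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
    done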
00:07:09.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.706 06:59:53 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:09.706 06:59:53 -- nvmf/common.sh@7 -- # uname -s 00:07:09.706 06:59:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.706 06:59:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.706 06:59:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.706 06:59:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.706 06:59:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.706 06:59:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.706 06:59:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.706 06:59:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.706 06:59:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.706 06:59:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.706 06:59:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:07:09.707 06:59:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:07:09.707 06:59:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.707 06:59:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.707 06:59:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:09.707 06:59:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.707 06:59:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.707 06:59:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.707 06:59:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.707 06:59:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.707 06:59:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.707 06:59:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.707 06:59:53 -- 
paths/export.sh@5 -- # export PATH 00:07:09.707 06:59:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.707 06:59:53 -- nvmf/common.sh@46 -- # : 0 00:07:09.707 06:59:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:09.707 06:59:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:09.707 06:59:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:09.707 06:59:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.707 06:59:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.707 06:59:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:09.707 06:59:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:09.707 06:59:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:09.707 06:59:53 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.707 06:59:53 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:09.707 06:59:53 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:09.707 06:59:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:09.707 06:59:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.707 06:59:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:09.707 06:59:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:09.707 06:59:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:09.707 06:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.707 06:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.707 06:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.707 06:59:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:09.707 06:59:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:09.707 06:59:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:09.707 06:59:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:09.707 06:59:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:09.707 06:59:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:09.707 06:59:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.707 06:59:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.707 06:59:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:09.707 06:59:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:09.707 06:59:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:09.707 06:59:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:09.707 06:59:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:09.707 06:59:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.707 06:59:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:09.707 06:59:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:09.707 06:59:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:09.707 06:59:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:09.707 06:59:53 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:07:09.707 06:59:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:09.707 Cannot find device "nvmf_tgt_br" 00:07:09.707 06:59:53 -- nvmf/common.sh@154 -- # true 00:07:09.707 06:59:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:09.707 Cannot find device "nvmf_tgt_br2" 00:07:09.707 06:59:53 -- nvmf/common.sh@155 -- # true 00:07:09.707 06:59:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:09.707 06:59:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:09.707 Cannot find device "nvmf_tgt_br" 00:07:09.707 06:59:53 -- nvmf/common.sh@157 -- # true 00:07:09.707 06:59:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:09.707 Cannot find device "nvmf_tgt_br2" 00:07:09.707 06:59:53 -- nvmf/common.sh@158 -- # true 00:07:09.707 06:59:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:09.707 06:59:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:09.963 06:59:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:09.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.963 06:59:53 -- nvmf/common.sh@161 -- # true 00:07:09.963 06:59:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:09.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.963 06:59:53 -- nvmf/common.sh@162 -- # true 00:07:09.963 06:59:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:09.963 06:59:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:09.963 06:59:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:09.963 06:59:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:09.963 06:59:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:09.963 06:59:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:09.963 06:59:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:09.963 06:59:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:09.963 06:59:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:09.963 06:59:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:09.963 06:59:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:09.963 06:59:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:09.963 06:59:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:09.963 06:59:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:09.963 06:59:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:09.963 06:59:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:09.963 06:59:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:09.963 06:59:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:09.963 06:59:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:09.963 06:59:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:09.963 06:59:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:09.963 06:59:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:07:09.963 06:59:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:09.963 06:59:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:09.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:07:09.963 00:07:09.963 --- 10.0.0.2 ping statistics --- 00:07:09.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.963 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:09.963 06:59:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:09.963 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:09.963 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:07:09.963 00:07:09.963 --- 10.0.0.3 ping statistics --- 00:07:09.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.963 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:09.963 06:59:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:09.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:09.963 00:07:09.963 --- 10.0.0.1 ping statistics --- 00:07:09.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.963 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:09.963 06:59:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.963 06:59:53 -- nvmf/common.sh@421 -- # return 0 00:07:09.963 06:59:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:09.963 06:59:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.963 06:59:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:09.963 06:59:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:09.963 06:59:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.963 06:59:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:09.963 06:59:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:09.963 06:59:54 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:09.963 06:59:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:09.963 06:59:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:09.963 06:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:10.220 06:59:54 -- nvmf/common.sh@469 -- # nvmfpid=61685 00:07:10.220 06:59:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.220 06:59:54 -- nvmf/common.sh@470 -- # waitforlisten 61685 00:07:10.220 06:59:54 -- common/autotest_common.sh@819 -- # '[' -z 61685 ']' 00:07:10.220 06:59:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.220 06:59:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.220 06:59:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.220 06:59:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.220 06:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:10.220 [2024-07-11 06:59:54.098769] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
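The nvmf_veth_init block above boils down to the following network plumbing (condensed sketch of the commands just logged, with the second target interface omitted): one veth pair per endpoint, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to a bridge, and the NVMe/TCP port opened through iptables.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                                       # initiator -> target sanity check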
00:07:10.220 [2024-07-11 06:59:54.098937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.220 [2024-07-11 06:59:54.245412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.477 [2024-07-11 06:59:54.399142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.477 [2024-07-11 06:59:54.399318] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.477 [2024-07-11 06:59:54.399332] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.477 [2024-07-11 06:59:54.399342] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.477 [2024-07-11 06:59:54.399490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.477 [2024-07-11 06:59:54.399814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.477 [2024-07-11 06:59:54.399887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.477 [2024-07-11 06:59:54.399897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.407 06:59:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:11.407 06:59:55 -- common/autotest_common.sh@852 -- # return 0 00:07:11.407 06:59:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:11.407 06:59:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:11.407 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 06:59:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:11.407 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.407 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 [2024-07-11 06:59:55.155129] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.407 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:11.407 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.407 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.407 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.407 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:11.407 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.407 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.407 06:59:55 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.407 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 [2024-07-11 06:59:55.235158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.407 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:11.407 06:59:55 -- target/connect_disconnect.sh@34 -- # set +x 00:07:13.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:43.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.505 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.205 07:03:39 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:55.205 07:03:39 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:55.205 07:03:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:55.205 07:03:39 -- nvmf/common.sh@116 -- # sync 00:10:55.205 07:03:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:55.205 07:03:39 -- nvmf/common.sh@119 -- # set +e 00:10:55.205 07:03:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:55.205 07:03:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:55.205 rmmod nvme_tcp 00:10:55.464 rmmod nvme_fabrics 00:10:55.464 rmmod nvme_keyring 00:10:55.464 07:03:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:55.464 07:03:39 -- nvmf/common.sh@123 -- # set -e 00:10:55.464 07:03:39 -- nvmf/common.sh@124 -- # return 0 00:10:55.464 07:03:39 -- nvmf/common.sh@477 -- # '[' -n 61685 ']' 00:10:55.464 07:03:39 -- nvmf/common.sh@478 -- # killprocess 61685 00:10:55.464 07:03:39 -- common/autotest_common.sh@926 -- # '[' -z 61685 ']' 00:10:55.464 07:03:39 -- common/autotest_common.sh@930 -- # kill -0 61685 00:10:55.464 07:03:39 -- common/autotest_common.sh@931 -- # uname 00:10:55.464 07:03:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:55.464 07:03:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61685 00:10:55.464 07:03:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:55.464 killing process with pid 61685 00:10:55.464 07:03:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:55.464 07:03:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61685' 00:10:55.464 07:03:39 -- common/autotest_common.sh@945 -- # kill 61685 00:10:55.464 07:03:39 -- common/autotest_common.sh@950 -- # wait 61685 00:10:55.723 07:03:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:55.723 07:03:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:55.723 07:03:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:55.723 07:03:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.723 07:03:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:55.723 07:03:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.723 07:03:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.723 07:03:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.723 07:03:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:55.723 00:10:55.723 real 3m46.081s 00:10:55.723 user 14m45.321s 00:10:55.723 sys 0m19.933s 00:10:55.723 07:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.723 
************************************ 00:10:55.723 END TEST nvmf_connect_disconnect 00:10:55.723 ************************************ 00:10:55.723 07:03:39 -- common/autotest_common.sh@10 -- # set +x 00:10:55.723 07:03:39 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:55.723 07:03:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:55.723 07:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:55.723 07:03:39 -- common/autotest_common.sh@10 -- # set +x 00:10:55.723 ************************************ 00:10:55.723 START TEST nvmf_multitarget 00:10:55.723 ************************************ 00:10:55.723 07:03:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:55.723 * Looking for test storage... 00:10:55.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.723 07:03:39 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.723 07:03:39 -- nvmf/common.sh@7 -- # uname -s 00:10:55.723 07:03:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.723 07:03:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.723 07:03:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.723 07:03:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.723 07:03:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.723 07:03:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.723 07:03:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.723 07:03:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.723 07:03:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.723 07:03:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.723 07:03:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:10:55.982 07:03:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:10:55.983 07:03:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.983 07:03:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.983 07:03:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:55.983 07:03:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.983 07:03:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.983 07:03:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.983 07:03:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.983 07:03:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.983 07:03:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.983 07:03:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.983 07:03:39 -- paths/export.sh@5 -- # export PATH 00:10:55.983 07:03:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.983 07:03:39 -- nvmf/common.sh@46 -- # : 0 00:10:55.983 07:03:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:55.983 07:03:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:55.983 07:03:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:55.983 07:03:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.983 07:03:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.983 07:03:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:55.983 07:03:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:55.983 07:03:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:55.983 07:03:39 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:55.983 07:03:39 -- target/multitarget.sh@15 -- # nvmftestinit 00:10:55.983 07:03:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:55.983 07:03:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.983 07:03:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:55.983 07:03:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:55.983 07:03:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:55.983 07:03:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.983 07:03:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.983 07:03:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.983 07:03:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:55.983 07:03:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:55.983 07:03:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:55.983 07:03:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:55.983 07:03:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:55.983 07:03:39 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:10:55.983 07:03:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.983 07:03:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.983 07:03:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:55.983 07:03:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:55.983 07:03:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:55.983 07:03:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:55.983 07:03:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:55.983 07:03:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.983 07:03:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:55.983 07:03:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:55.983 07:03:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:55.983 07:03:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:55.983 07:03:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:55.983 07:03:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:55.983 Cannot find device "nvmf_tgt_br" 00:10:55.983 07:03:39 -- nvmf/common.sh@154 -- # true 00:10:55.983 07:03:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.983 Cannot find device "nvmf_tgt_br2" 00:10:55.983 07:03:39 -- nvmf/common.sh@155 -- # true 00:10:55.983 07:03:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:55.983 07:03:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:55.983 Cannot find device "nvmf_tgt_br" 00:10:55.983 07:03:39 -- nvmf/common.sh@157 -- # true 00:10:55.983 07:03:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:55.983 Cannot find device "nvmf_tgt_br2" 00:10:55.983 07:03:39 -- nvmf/common.sh@158 -- # true 00:10:55.983 07:03:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:55.983 07:03:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:55.983 07:03:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.983 07:03:39 -- nvmf/common.sh@161 -- # true 00:10:55.983 07:03:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.983 07:03:39 -- nvmf/common.sh@162 -- # true 00:10:55.983 07:03:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.983 07:03:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:55.983 07:03:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.983 07:03:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.983 07:03:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.983 07:03:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.983 07:03:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.983 07:03:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:55.983 07:03:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:55.983 07:03:39 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:10:55.983 07:03:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:55.983 07:03:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:55.983 07:03:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:55.983 07:03:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:55.983 07:03:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:55.983 07:03:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:55.983 07:03:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:55.983 07:03:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:56.253 07:03:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:56.253 07:03:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:56.253 07:03:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:56.253 07:03:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:56.253 07:03:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:56.253 07:03:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:56.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:56.253 00:10:56.253 --- 10.0.0.2 ping statistics --- 00:10:56.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.253 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:56.253 07:03:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:56.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:56.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:10:56.253 00:10:56.253 --- 10.0.0.3 ping statistics --- 00:10:56.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.253 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:56.253 07:03:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:56.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:56.253 00:10:56.253 --- 10.0.0.1 ping statistics --- 00:10:56.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.253 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:56.253 07:03:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.253 07:03:40 -- nvmf/common.sh@421 -- # return 0 00:10:56.253 07:03:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:56.253 07:03:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.253 07:03:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:56.253 07:03:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:56.253 07:03:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.253 07:03:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:56.253 07:03:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:56.253 07:03:40 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:56.253 07:03:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:56.253 07:03:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:56.253 07:03:40 -- common/autotest_common.sh@10 -- # set +x 00:10:56.253 07:03:40 -- nvmf/common.sh@469 -- # nvmfpid=65462 00:10:56.253 07:03:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.253 07:03:40 -- nvmf/common.sh@470 -- # waitforlisten 65462 00:10:56.253 07:03:40 -- common/autotest_common.sh@819 -- # '[' -z 65462 ']' 00:10:56.253 07:03:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.253 07:03:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:56.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.253 07:03:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.253 07:03:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:56.253 07:03:40 -- common/autotest_common.sh@10 -- # set +x 00:10:56.253 [2024-07-11 07:03:40.196101] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:56.253 [2024-07-11 07:03:40.196157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.522 [2024-07-11 07:03:40.333327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.522 [2024-07-11 07:03:40.428762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:56.522 [2024-07-11 07:03:40.428914] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.522 [2024-07-11 07:03:40.428927] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.522 [2024-07-11 07:03:40.428935] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:56.522 [2024-07-11 07:03:40.429075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.522 [2024-07-11 07:03:40.429180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.522 [2024-07-11 07:03:40.429322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.522 [2024-07-11 07:03:40.429326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.088 07:03:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:57.088 07:03:41 -- common/autotest_common.sh@852 -- # return 0 00:10:57.088 07:03:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:57.088 07:03:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:57.088 07:03:41 -- common/autotest_common.sh@10 -- # set +x 00:10:57.088 07:03:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.088 07:03:41 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:57.088 07:03:41 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:57.088 07:03:41 -- target/multitarget.sh@21 -- # jq length 00:10:57.346 07:03:41 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:57.346 07:03:41 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:57.346 "nvmf_tgt_1" 00:10:57.346 07:03:41 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:57.605 "nvmf_tgt_2" 00:10:57.605 07:03:41 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:57.605 07:03:41 -- target/multitarget.sh@28 -- # jq length 00:10:57.605 07:03:41 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:57.605 07:03:41 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:57.863 true 00:10:57.863 07:03:41 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:57.863 true 00:10:57.863 07:03:41 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:57.863 07:03:41 -- target/multitarget.sh@35 -- # jq length 00:10:58.122 07:03:41 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:58.122 07:03:41 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:58.122 07:03:41 -- target/multitarget.sh@41 -- # nvmftestfini 00:10:58.122 07:03:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:58.122 07:03:41 -- nvmf/common.sh@116 -- # sync 00:10:58.122 07:03:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:58.122 07:03:42 -- nvmf/common.sh@119 -- # set +e 00:10:58.122 07:03:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:58.122 07:03:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:58.122 rmmod nvme_tcp 00:10:58.122 rmmod nvme_fabrics 00:10:58.122 rmmod nvme_keyring 00:10:58.122 07:03:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:58.122 07:03:42 -- nvmf/common.sh@123 -- # set -e 00:10:58.122 07:03:42 -- nvmf/common.sh@124 -- # return 0 00:10:58.122 07:03:42 -- nvmf/common.sh@477 -- # '[' -n 65462 ']' 00:10:58.122 07:03:42 -- nvmf/common.sh@478 -- # killprocess 65462 00:10:58.122 07:03:42 
-- common/autotest_common.sh@926 -- # '[' -z 65462 ']' 00:10:58.122 07:03:42 -- common/autotest_common.sh@930 -- # kill -0 65462 00:10:58.122 07:03:42 -- common/autotest_common.sh@931 -- # uname 00:10:58.122 07:03:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:58.122 07:03:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65462 00:10:58.122 07:03:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:58.122 killing process with pid 65462 00:10:58.122 07:03:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:58.122 07:03:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65462' 00:10:58.122 07:03:42 -- common/autotest_common.sh@945 -- # kill 65462 00:10:58.122 07:03:42 -- common/autotest_common.sh@950 -- # wait 65462 00:10:58.380 07:03:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:58.380 07:03:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:58.380 07:03:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:58.380 07:03:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.380 07:03:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:58.380 07:03:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.380 07:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.380 07:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.380 07:03:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:58.380 ************************************ 00:10:58.381 END TEST nvmf_multitarget 00:10:58.381 ************************************ 00:10:58.381 00:10:58.381 real 0m2.687s 00:10:58.381 user 0m8.687s 00:10:58.381 sys 0m0.648s 00:10:58.381 07:03:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.381 07:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:58.381 07:03:42 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:58.381 07:03:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:58.381 07:03:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.381 07:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:58.381 ************************************ 00:10:58.381 START TEST nvmf_rpc 00:10:58.381 ************************************ 00:10:58.381 07:03:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:58.638 * Looking for test storage... 
00:10:58.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:58.638 07:03:42 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:58.638 07:03:42 -- nvmf/common.sh@7 -- # uname -s 00:10:58.638 07:03:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.638 07:03:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.638 07:03:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.638 07:03:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.638 07:03:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.638 07:03:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.638 07:03:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.638 07:03:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.638 07:03:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.638 07:03:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.638 07:03:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:10:58.638 07:03:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:10:58.638 07:03:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.638 07:03:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.638 07:03:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:58.638 07:03:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:58.638 07:03:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.638 07:03:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.638 07:03:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.638 07:03:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.638 07:03:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.638 07:03:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.638 07:03:42 -- paths/export.sh@5 
-- # export PATH 00:10:58.638 07:03:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.638 07:03:42 -- nvmf/common.sh@46 -- # : 0 00:10:58.638 07:03:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:58.638 07:03:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:58.638 07:03:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:58.638 07:03:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.638 07:03:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.638 07:03:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:58.638 07:03:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:58.638 07:03:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:58.638 07:03:42 -- target/rpc.sh@11 -- # loops=5 00:10:58.638 07:03:42 -- target/rpc.sh@23 -- # nvmftestinit 00:10:58.638 07:03:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:58.638 07:03:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.638 07:03:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:58.638 07:03:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:58.638 07:03:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:58.638 07:03:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.638 07:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.639 07:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.639 07:03:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:58.639 07:03:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:58.639 07:03:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:58.639 07:03:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:58.639 07:03:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:58.639 07:03:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:58.639 07:03:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.639 07:03:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.639 07:03:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:58.639 07:03:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:58.639 07:03:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:58.639 07:03:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:58.639 07:03:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:58.639 07:03:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.639 07:03:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:58.639 07:03:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:58.639 07:03:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:58.639 07:03:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:58.639 07:03:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:58.639 07:03:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:58.639 Cannot find device 
"nvmf_tgt_br" 00:10:58.639 07:03:42 -- nvmf/common.sh@154 -- # true 00:10:58.639 07:03:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:58.639 Cannot find device "nvmf_tgt_br2" 00:10:58.639 07:03:42 -- nvmf/common.sh@155 -- # true 00:10:58.639 07:03:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:58.639 07:03:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:58.639 Cannot find device "nvmf_tgt_br" 00:10:58.639 07:03:42 -- nvmf/common.sh@157 -- # true 00:10:58.639 07:03:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:58.639 Cannot find device "nvmf_tgt_br2" 00:10:58.639 07:03:42 -- nvmf/common.sh@158 -- # true 00:10:58.639 07:03:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:58.639 07:03:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:58.639 07:03:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:58.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.639 07:03:42 -- nvmf/common.sh@161 -- # true 00:10:58.639 07:03:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:58.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.639 07:03:42 -- nvmf/common.sh@162 -- # true 00:10:58.639 07:03:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:58.639 07:03:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:58.639 07:03:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:58.639 07:03:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:58.639 07:03:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:58.898 07:03:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:58.898 07:03:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:58.898 07:03:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:58.898 07:03:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:58.898 07:03:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:58.898 07:03:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:58.898 07:03:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:58.898 07:03:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:58.898 07:03:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:58.898 07:03:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:58.898 07:03:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:58.898 07:03:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:58.898 07:03:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:58.898 07:03:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:58.898 07:03:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:58.898 07:03:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:58.898 07:03:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:58.898 07:03:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.898 07:03:42 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:58.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:58.898 00:10:58.898 --- 10.0.0.2 ping statistics --- 00:10:58.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.898 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:58.898 07:03:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:58.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:10:58.898 00:10:58.898 --- 10.0.0.3 ping statistics --- 00:10:58.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.898 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:58.898 07:03:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:10:58.898 00:10:58.898 --- 10.0.0.1 ping statistics --- 00:10:58.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.898 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:58.898 07:03:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.898 07:03:42 -- nvmf/common.sh@421 -- # return 0 00:10:58.898 07:03:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:58.898 07:03:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.898 07:03:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:58.898 07:03:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:58.898 07:03:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.898 07:03:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:58.898 07:03:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:58.898 07:03:42 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:58.898 07:03:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:58.898 07:03:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:58.898 07:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:58.898 07:03:42 -- nvmf/common.sh@469 -- # nvmfpid=65684 00:10:58.898 07:03:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.898 07:03:42 -- nvmf/common.sh@470 -- # waitforlisten 65684 00:10:58.898 07:03:42 -- common/autotest_common.sh@819 -- # '[' -z 65684 ']' 00:10:58.898 07:03:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.898 07:03:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:58.898 07:03:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.898 07:03:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:58.898 07:03:42 -- common/autotest_common.sh@10 -- # set +x 00:10:58.898 [2024-07-11 07:03:42.946873] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:58.898 [2024-07-11 07:03:42.946955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.158 [2024-07-11 07:03:43.083300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.158 [2024-07-11 07:03:43.182604] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:59.158 [2024-07-11 07:03:43.182785] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.158 [2024-07-11 07:03:43.182802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.158 [2024-07-11 07:03:43.182814] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.158 [2024-07-11 07:03:43.183016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.158 [2024-07-11 07:03:43.183129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.158 [2024-07-11 07:03:43.183270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.158 [2024-07-11 07:03:43.183281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.104 07:03:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:00.104 07:03:43 -- common/autotest_common.sh@852 -- # return 0 00:11:00.104 07:03:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:00.104 07:03:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:00.104 07:03:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.104 07:03:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.104 07:03:43 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:00.104 07:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.104 07:03:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.104 07:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.104 07:03:43 -- target/rpc.sh@26 -- # stats='{ 00:11:00.104 "poll_groups": [ 00:11:00.104 { 00:11:00.104 "admin_qpairs": 0, 00:11:00.104 "completed_nvme_io": 0, 00:11:00.104 "current_admin_qpairs": 0, 00:11:00.104 "current_io_qpairs": 0, 00:11:00.104 "io_qpairs": 0, 00:11:00.104 "name": "nvmf_tgt_poll_group_0", 00:11:00.104 "pending_bdev_io": 0, 00:11:00.104 "transports": [] 00:11:00.104 }, 00:11:00.104 { 00:11:00.104 "admin_qpairs": 0, 00:11:00.104 "completed_nvme_io": 0, 00:11:00.104 "current_admin_qpairs": 0, 00:11:00.104 "current_io_qpairs": 0, 00:11:00.104 "io_qpairs": 0, 00:11:00.104 "name": "nvmf_tgt_poll_group_1", 00:11:00.104 "pending_bdev_io": 0, 00:11:00.104 "transports": [] 00:11:00.104 }, 00:11:00.104 { 00:11:00.104 "admin_qpairs": 0, 00:11:00.104 "completed_nvme_io": 0, 00:11:00.104 "current_admin_qpairs": 0, 00:11:00.104 "current_io_qpairs": 0, 00:11:00.104 "io_qpairs": 0, 00:11:00.104 "name": "nvmf_tgt_poll_group_2", 00:11:00.104 "pending_bdev_io": 0, 00:11:00.104 "transports": [] 00:11:00.104 }, 00:11:00.104 { 00:11:00.104 "admin_qpairs": 0, 00:11:00.104 "completed_nvme_io": 0, 00:11:00.104 "current_admin_qpairs": 0, 00:11:00.104 "current_io_qpairs": 0, 00:11:00.104 "io_qpairs": 0, 00:11:00.104 "name": "nvmf_tgt_poll_group_3", 00:11:00.104 "pending_bdev_io": 0, 00:11:00.104 "transports": [] 00:11:00.104 } 00:11:00.104 ], 00:11:00.104 "tick_rate": 2200000000 00:11:00.104 }' 00:11:00.104 
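(The jcount/jsum checks that follow simply run jq over this nvmf_get_stats output: with the target started on -m 0xF there is one poll group per reactor here, four in total, and before any transport or connection exists every qpair counter is zero. A rough equivalent outside the test harness, assuming a running target and the stock scripts/rpc.py client — the stats.json filename is only illustrative:

  # dump the same stats the test captured above
  scripts/rpc.py nvmf_get_stats > stats.json
  # number of poll groups (expected: 4 for -m 0xF)
  jq '.poll_groups | length' stats.json
  # sum of admin/io qpairs across all poll groups (expected: 0 at this point)
  jq '[.poll_groups[].admin_qpairs] | add' stats.json
  jq '[.poll_groups[].io_qpairs] | add' stats.json
)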
07:03:43 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:00.104 07:03:43 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:00.104 07:03:43 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:00.104 07:03:43 -- target/rpc.sh@15 -- # wc -l 00:11:00.104 07:03:43 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:00.104 07:03:43 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:00.104 07:03:44 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:00.104 07:03:44 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.104 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.104 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.104 [2024-07-11 07:03:44.028988] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.104 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.104 07:03:44 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:00.104 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.104 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.104 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.104 07:03:44 -- target/rpc.sh@33 -- # stats='{ 00:11:00.104 "poll_groups": [ 00:11:00.104 { 00:11:00.104 "admin_qpairs": 0, 00:11:00.104 "completed_nvme_io": 0, 00:11:00.104 "current_admin_qpairs": 0, 00:11:00.104 "current_io_qpairs": 0, 00:11:00.104 "io_qpairs": 0, 00:11:00.104 "name": "nvmf_tgt_poll_group_0", 00:11:00.104 "pending_bdev_io": 0, 00:11:00.104 "transports": [ 00:11:00.104 { 00:11:00.104 "trtype": "TCP" 00:11:00.104 } 00:11:00.104 ] 00:11:00.104 }, 00:11:00.104 { 00:11:00.104 "admin_qpairs": 0, 00:11:00.104 "completed_nvme_io": 0, 00:11:00.104 "current_admin_qpairs": 0, 00:11:00.104 "current_io_qpairs": 0, 00:11:00.104 "io_qpairs": 0, 00:11:00.104 "name": "nvmf_tgt_poll_group_1", 00:11:00.104 "pending_bdev_io": 0, 00:11:00.104 "transports": [ 00:11:00.104 { 00:11:00.105 "trtype": "TCP" 00:11:00.105 } 00:11:00.105 ] 00:11:00.105 }, 00:11:00.105 { 00:11:00.105 "admin_qpairs": 0, 00:11:00.105 "completed_nvme_io": 0, 00:11:00.105 "current_admin_qpairs": 0, 00:11:00.105 "current_io_qpairs": 0, 00:11:00.105 "io_qpairs": 0, 00:11:00.105 "name": "nvmf_tgt_poll_group_2", 00:11:00.105 "pending_bdev_io": 0, 00:11:00.105 "transports": [ 00:11:00.105 { 00:11:00.105 "trtype": "TCP" 00:11:00.105 } 00:11:00.105 ] 00:11:00.105 }, 00:11:00.105 { 00:11:00.105 "admin_qpairs": 0, 00:11:00.105 "completed_nvme_io": 0, 00:11:00.105 "current_admin_qpairs": 0, 00:11:00.105 "current_io_qpairs": 0, 00:11:00.105 "io_qpairs": 0, 00:11:00.105 "name": "nvmf_tgt_poll_group_3", 00:11:00.105 "pending_bdev_io": 0, 00:11:00.105 "transports": [ 00:11:00.105 { 00:11:00.105 "trtype": "TCP" 00:11:00.105 } 00:11:00.105 ] 00:11:00.105 } 00:11:00.105 ], 00:11:00.105 "tick_rate": 2200000000 00:11:00.105 }' 00:11:00.105 07:03:44 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:00.105 07:03:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:00.105 07:03:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:00.105 07:03:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:00.105 07:03:44 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:00.105 07:03:44 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:00.105 07:03:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:00.105 07:03:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:00.105 07:03:44 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:00.363 07:03:44 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:00.363 07:03:44 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:00.363 07:03:44 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:00.363 07:03:44 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:00.363 07:03:44 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:00.363 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.363 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 Malloc1 00:11:00.363 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.363 07:03:44 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.363 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.363 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.363 07:03:44 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.363 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.363 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.363 07:03:44 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:00.363 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.363 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.363 07:03:44 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.363 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.363 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 [2024-07-11 07:03:44.238765] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.363 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.363 07:03:44 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 -a 10.0.0.2 -s 4420 00:11:00.363 07:03:44 -- common/autotest_common.sh@640 -- # local es=0 00:11:00.364 07:03:44 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 -a 10.0.0.2 -s 4420 00:11:00.364 07:03:44 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:00.364 07:03:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:00.364 07:03:44 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:00.364 07:03:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:00.364 07:03:44 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:00.364 07:03:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:00.364 07:03:44 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:00.364 07:03:44 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:00.364 07:03:44 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 -a 10.0.0.2 -s 4420 00:11:00.364 [2024-07-11 07:03:44.267092] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77' 00:11:00.364 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:00.364 could not add new controller: failed to write to nvme-fabrics device 00:11:00.364 07:03:44 -- common/autotest_common.sh@643 -- # es=1 00:11:00.364 07:03:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:00.364 07:03:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:00.364 07:03:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:00.364 07:03:44 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:00.364 07:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:00.364 07:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:00.364 07:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:00.364 07:03:44 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.622 07:03:44 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.622 07:03:44 -- common/autotest_common.sh@1177 -- # local i=0 00:11:00.622 07:03:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.622 07:03:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:00.622 07:03:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:02.524 07:03:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:02.525 07:03:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:02.525 07:03:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.525 07:03:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:02.525 07:03:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.525 07:03:46 -- common/autotest_common.sh@1187 -- # return 0 00:11:02.525 07:03:46 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.525 07:03:46 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.525 07:03:46 -- common/autotest_common.sh@1198 -- # local i=0 00:11:02.525 07:03:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:02.525 07:03:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.525 07:03:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:02.525 07:03:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.525 07:03:46 -- common/autotest_common.sh@1210 -- # return 0 00:11:02.525 07:03:46 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:02.525 07:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.525 07:03:46 -- common/autotest_common.sh@10 
-- # set +x 00:11:02.525 07:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.525 07:03:46 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.525 07:03:46 -- common/autotest_common.sh@640 -- # local es=0 00:11:02.525 07:03:46 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.525 07:03:46 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:02.525 07:03:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:02.525 07:03:46 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:02.525 07:03:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:02.525 07:03:46 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:02.525 07:03:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:02.525 07:03:46 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:02.525 07:03:46 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:02.525 07:03:46 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.783 [2024-07-11 07:03:46.588601] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77' 00:11:02.783 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:02.783 could not add new controller: failed to write to nvme-fabrics device 00:11:02.783 07:03:46 -- common/autotest_common.sh@643 -- # es=1 00:11:02.783 07:03:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:02.783 07:03:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:02.783 07:03:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:02.783 07:03:46 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:02.783 07:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.783 07:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:02.783 07:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.783 07:03:46 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.783 07:03:46 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.783 07:03:46 -- common/autotest_common.sh@1177 -- # local i=0 00:11:02.783 07:03:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.783 07:03:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:02.783 07:03:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:05.316 07:03:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:05.316 07:03:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:05.316 07:03:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.316 07:03:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:05.316 07:03:48 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.316 07:03:48 -- common/autotest_common.sh@1187 -- # return 0 00:11:05.316 07:03:48 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.316 07:03:48 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.316 07:03:48 -- common/autotest_common.sh@1198 -- # local i=0 00:11:05.316 07:03:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:05.316 07:03:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.316 07:03:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:05.316 07:03:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.316 07:03:48 -- common/autotest_common.sh@1210 -- # return 0 00:11:05.316 07:03:48 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.316 07:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.316 07:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 07:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.316 07:03:48 -- target/rpc.sh@81 -- # seq 1 5 00:11:05.316 07:03:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:05.316 07:03:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.316 07:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.316 07:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 07:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.316 07:03:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.316 07:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.316 07:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 [2024-07-11 07:03:48.891567] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.316 07:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.316 07:03:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:05.316 07:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.316 07:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 07:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.316 07:03:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.316 07:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:05.316 07:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:05.316 07:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:05.316 07:03:48 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.316 07:03:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.316 07:03:49 -- common/autotest_common.sh@1177 -- # local i=0 00:11:05.316 07:03:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.316 07:03:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:05.316 07:03:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:07.221 07:03:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
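The exchange above exercises the per-subsystem host allowlist: the first connect attempt is rejected with "does not allow host" until the host NQN is registered via nvmf_subsystem_add_host, or allow-any-host is re-enabled (the log exercises both paths). A condensed sketch of the add_host path, using the same RPCs that appear in the trace; scripts/rpc.py stands in for the rpc_cmd wrapper, and <host-uuid> is a placeholder for the value nvme gen-hostnqn produced on this node:

  # Subsystem with allow-any-host disabled, listening on the target veth address.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Expected to fail: this host is not on the subsystem's allowlist yet.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid=<host-uuid> \
      || echo 'rejected, as expected'

  # Register the host; the same connect now succeeds.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid=<host-uuid>
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1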
00:11:07.221 07:03:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:07.221 07:03:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.221 07:03:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:07.221 07:03:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.221 07:03:51 -- common/autotest_common.sh@1187 -- # return 0 00:11:07.221 07:03:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.221 07:03:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.221 07:03:51 -- common/autotest_common.sh@1198 -- # local i=0 00:11:07.221 07:03:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:07.221 07:03:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.221 07:03:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:07.221 07:03:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.221 07:03:51 -- common/autotest_common.sh@1210 -- # return 0 00:11:07.221 07:03:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.221 07:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.221 07:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 07:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.221 07:03:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.221 07:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.221 07:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 07:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.221 07:03:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:07.221 07:03:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.221 07:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.221 07:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 07:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.221 07:03:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.221 07:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.221 07:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 [2024-07-11 07:03:51.220350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.221 07:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.221 07:03:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:07.221 07:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.221 07:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 07:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.221 07:03:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.221 07:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:07.221 07:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:07.221 07:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:07.221 07:03:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 
--hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.480 07:03:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.480 07:03:51 -- common/autotest_common.sh@1177 -- # local i=0 00:11:07.480 07:03:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.480 07:03:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:07.480 07:03:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:09.383 07:03:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:09.383 07:03:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:09.383 07:03:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.383 07:03:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:09.383 07:03:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.383 07:03:53 -- common/autotest_common.sh@1187 -- # return 0 00:11:09.383 07:03:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.659 07:03:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.659 07:03:53 -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.659 07:03:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:09.659 07:03:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.659 07:03:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.659 07:03:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:09.659 07:03:53 -- common/autotest_common.sh@1210 -- # return 0 00:11:09.659 07:03:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:09.659 07:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:09.659 07:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:09.659 07:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:09.659 07:03:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.659 07:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:09.659 07:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:09.659 07:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:09.659 07:03:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:09.659 07:03:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:09.659 07:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:09.659 07:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:09.659 07:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:09.659 07:03:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.659 07:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:09.659 07:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:09.659 [2024-07-11 07:03:53.521323] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.659 07:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:09.659 07:03:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:09.659 07:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:09.659 07:03:53 -- common/autotest_common.sh@10 -- # set 
+x 00:11:09.659 07:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:09.659 07:03:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:09.659 07:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:09.659 07:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:09.659 07:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:09.659 07:03:53 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.659 07:03:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.659 07:03:53 -- common/autotest_common.sh@1177 -- # local i=0 00:11:09.659 07:03:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.659 07:03:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:09.659 07:03:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:12.194 07:03:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:12.194 07:03:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:12.194 07:03:55 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.194 07:03:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:12.194 07:03:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.194 07:03:55 -- common/autotest_common.sh@1187 -- # return 0 00:11:12.194 07:03:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.194 07:03:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.194 07:03:55 -- common/autotest_common.sh@1198 -- # local i=0 00:11:12.194 07:03:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:12.194 07:03:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.194 07:03:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.194 07:03:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:12.194 07:03:55 -- common/autotest_common.sh@1210 -- # return 0 00:11:12.194 07:03:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.194 07:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.194 07:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 07:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.194 07:03:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.194 07:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.194 07:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 07:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.194 07:03:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:12.194 07:03:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:12.194 07:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.194 07:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 07:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.194 07:03:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.194 07:03:55 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:12.194 07:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 [2024-07-11 07:03:55.925907] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.194 07:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.194 07:03:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:12.194 07:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.194 07:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 07:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.194 07:03:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:12.194 07:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.194 07:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.194 07:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.194 07:03:55 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.194 07:03:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.194 07:03:56 -- common/autotest_common.sh@1177 -- # local i=0 00:11:12.194 07:03:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.194 07:03:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:12.194 07:03:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:14.096 07:03:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:14.096 07:03:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:14.096 07:03:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.096 07:03:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:14.096 07:03:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.096 07:03:58 -- common/autotest_common.sh@1187 -- # return 0 00:11:14.096 07:03:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.355 07:03:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.355 07:03:58 -- common/autotest_common.sh@1198 -- # local i=0 00:11:14.355 07:03:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:14.355 07:03:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.355 07:03:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.355 07:03:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:14.355 07:03:58 -- common/autotest_common.sh@1210 -- # return 0 00:11:14.355 07:03:58 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.355 07:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.355 07:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:14.355 07:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.355 07:03:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.355 07:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.355 07:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:14.355 07:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.355 07:03:58 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
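Each iteration of the seq 1 5 loop above runs the same subsystem and namespace lifecycle. A per-iteration sketch, with the host flags abbreviated to the NVME_HOST array that nvmf/common.sh builds from the generated host NQN:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach the Malloc bdev as nsid 5
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  sleep 2                                                    # let the namespace surface as a block device
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # expect exactly one matching device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1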
00:11:14.355 07:03:58 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.355 07:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.355 07:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:14.355 07:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.355 07:03:58 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.355 07:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.355 07:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:14.355 [2024-07-11 07:03:58.238362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.355 07:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.355 07:03:58 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:14.355 07:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.355 07:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:14.355 07:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.355 07:03:58 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.355 07:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.355 07:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:14.355 07:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.355 07:03:58 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.613 07:03:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.613 07:03:58 -- common/autotest_common.sh@1177 -- # local i=0 00:11:14.613 07:03:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.613 07:03:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:14.613 07:03:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:16.513 07:04:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:16.513 07:04:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:16.513 07:04:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.513 07:04:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:16.513 07:04:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.513 07:04:00 -- common/autotest_common.sh@1187 -- # return 0 00:11:16.513 07:04:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.513 07:04:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.513 07:04:00 -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.513 07:04:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:16.513 07:04:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.513 07:04:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:16.513 07:04:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.513 07:04:00 -- common/autotest_common.sh@1210 -- # return 0 00:11:16.513 07:04:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.513 07:04:00 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:16.513 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.513 07:04:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.513 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.513 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.513 07:04:00 -- target/rpc.sh@99 -- # seq 1 5 00:11:16.513 07:04:00 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.513 07:04:00 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.513 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.513 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.513 07:04:00 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.513 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.513 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 [2024-07-11 07:04:00.558482] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.513 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.513 07:04:00 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.513 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.513 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.513 07:04:00 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.513 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.513 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.773 07:04:00 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 [2024-07-11 07:04:00.606480] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.773 07:04:00 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 [2024-07-11 07:04:00.654603] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.773 07:04:00 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 [2024-07-11 07:04:00.702666] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.773 07:04:00 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 [2024-07-11 07:04:00.750731] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.773 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.773 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.773 07:04:00 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.773 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.774 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.774 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.774 07:04:00 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.774 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.774 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.774 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.774 07:04:00 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.774 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.774 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.774 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.774 07:04:00 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:16.774 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.774 07:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:16.774 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.774 07:04:00 -- target/rpc.sh@110 -- # stats='{ 00:11:16.774 "poll_groups": [ 00:11:16.774 { 00:11:16.774 "admin_qpairs": 2, 00:11:16.774 "completed_nvme_io": 66, 00:11:16.774 "current_admin_qpairs": 0, 00:11:16.774 "current_io_qpairs": 0, 00:11:16.774 "io_qpairs": 16, 00:11:16.774 "name": "nvmf_tgt_poll_group_0", 00:11:16.774 "pending_bdev_io": 0, 00:11:16.774 "transports": [ 00:11:16.774 { 00:11:16.774 "trtype": "TCP" 00:11:16.774 } 00:11:16.774 ] 00:11:16.774 }, 00:11:16.774 { 00:11:16.774 "admin_qpairs": 3, 00:11:16.774 "completed_nvme_io": 69, 00:11:16.774 "current_admin_qpairs": 0, 00:11:16.774 "current_io_qpairs": 0, 00:11:16.774 "io_qpairs": 17, 00:11:16.774 "name": "nvmf_tgt_poll_group_1", 00:11:16.774 "pending_bdev_io": 0, 00:11:16.774 "transports": [ 00:11:16.774 { 00:11:16.774 "trtype": "TCP" 00:11:16.774 } 00:11:16.774 ] 00:11:16.774 }, 00:11:16.774 { 00:11:16.774 "admin_qpairs": 1, 00:11:16.774 "completed_nvme_io": 119, 00:11:16.774 "current_admin_qpairs": 0, 00:11:16.774 "current_io_qpairs": 0, 00:11:16.774 "io_qpairs": 19, 00:11:16.774 "name": "nvmf_tgt_poll_group_2", 00:11:16.774 "pending_bdev_io": 0, 00:11:16.774 "transports": [ 00:11:16.774 { 00:11:16.774 "trtype": "TCP" 00:11:16.774 } 00:11:16.774 ] 00:11:16.774 }, 00:11:16.774 { 00:11:16.774 "admin_qpairs": 1, 00:11:16.774 "completed_nvme_io": 166, 00:11:16.774 "current_admin_qpairs": 0, 00:11:16.774 "current_io_qpairs": 0, 00:11:16.774 "io_qpairs": 18, 00:11:16.774 "name": "nvmf_tgt_poll_group_3", 00:11:16.774 "pending_bdev_io": 0, 00:11:16.774 "transports": [ 00:11:16.774 { 00:11:16.774 "trtype": "TCP" 00:11:16.774 } 00:11:16.774 ] 00:11:16.774 } 00:11:16.774 ], 00:11:16.774 "tick_rate": 2200000000 00:11:16.774 }' 00:11:16.774 07:04:00 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:16.774 07:04:00 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:16.774 07:04:00 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:16.774 07:04:00 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:17.033 07:04:00 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:17.033 07:04:00 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:17.033 07:04:00 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:17.033 07:04:00 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
00:11:17.033 07:04:00 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:17.033 07:04:00 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:17.033 07:04:00 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:17.033 07:04:00 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:17.033 07:04:00 -- target/rpc.sh@123 -- # nvmftestfini 00:11:17.033 07:04:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:17.033 07:04:00 -- nvmf/common.sh@116 -- # sync 00:11:17.033 07:04:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:17.033 07:04:00 -- nvmf/common.sh@119 -- # set +e 00:11:17.033 07:04:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:17.033 07:04:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:17.033 rmmod nvme_tcp 00:11:17.033 rmmod nvme_fabrics 00:11:17.033 rmmod nvme_keyring 00:11:17.033 07:04:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:17.033 07:04:01 -- nvmf/common.sh@123 -- # set -e 00:11:17.033 07:04:01 -- nvmf/common.sh@124 -- # return 0 00:11:17.033 07:04:01 -- nvmf/common.sh@477 -- # '[' -n 65684 ']' 00:11:17.033 07:04:01 -- nvmf/common.sh@478 -- # killprocess 65684 00:11:17.033 07:04:01 -- common/autotest_common.sh@926 -- # '[' -z 65684 ']' 00:11:17.033 07:04:01 -- common/autotest_common.sh@930 -- # kill -0 65684 00:11:17.033 07:04:01 -- common/autotest_common.sh@931 -- # uname 00:11:17.033 07:04:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:17.033 07:04:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65684 00:11:17.033 07:04:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:17.033 07:04:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:17.033 07:04:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65684' 00:11:17.033 killing process with pid 65684 00:11:17.033 07:04:01 -- common/autotest_common.sh@945 -- # kill 65684 00:11:17.033 07:04:01 -- common/autotest_common.sh@950 -- # wait 65684 00:11:17.292 07:04:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:17.292 07:04:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:17.292 07:04:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:17.292 07:04:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.292 07:04:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:17.292 07:04:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.292 07:04:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.292 07:04:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.292 07:04:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:17.292 00:11:17.292 real 0m18.886s 00:11:17.292 user 1m11.955s 00:11:17.292 sys 0m1.985s 00:11:17.292 07:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.292 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:17.292 ************************************ 00:11:17.292 END TEST nvmf_rpc 00:11:17.292 ************************************ 00:11:17.551 07:04:01 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:17.551 07:04:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:17.551 07:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.551 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:17.551 ************************************ 00:11:17.551 START TEST nvmf_invalid 00:11:17.551 ************************************ 00:11:17.551 
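The pass/fail checks in the rpc.sh run that just ended come from two small helpers, jcount and jsum, whose behavior is visible in the xtrace output: capture nvmf_get_stats once, then count or sum whatever a jq filter selects. A plausible reconstruction (the real definitions live in test/nvmf/target/rpc.sh and may differ in detail):

  stats=$(scripts/rpc.py nvmf_get_stats)

  jcount() { jq "$1" <<<"$stats" | wc -l; }                        # number of values the filter yields
  jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; }  # numeric sum of those values

  (( $(jcount '.poll_groups[].name') == 4 ))     # one poll group per core with -m 0xF
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))   # I/O queue pairs were created during the run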
07:04:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:17.551 * Looking for test storage... 00:11:17.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.551 07:04:01 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.551 07:04:01 -- nvmf/common.sh@7 -- # uname -s 00:11:17.551 07:04:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.551 07:04:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.551 07:04:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.551 07:04:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.551 07:04:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.551 07:04:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.551 07:04:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.551 07:04:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.551 07:04:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.551 07:04:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.551 07:04:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:17.551 07:04:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:17.551 07:04:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.551 07:04:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.551 07:04:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.551 07:04:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.551 07:04:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.551 07:04:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.551 07:04:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.551 07:04:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 07:04:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 07:04:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 07:04:01 -- paths/export.sh@5 -- # export PATH 00:11:17.551 07:04:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.551 07:04:01 -- nvmf/common.sh@46 -- # : 0 00:11:17.551 07:04:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:17.551 07:04:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:17.551 07:04:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:17.551 07:04:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.551 07:04:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.551 07:04:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:17.551 07:04:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:17.551 07:04:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:17.551 07:04:01 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:17.551 07:04:01 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.551 07:04:01 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:17.551 07:04:01 -- target/invalid.sh@14 -- # target=foobar 00:11:17.551 07:04:01 -- target/invalid.sh@16 -- # RANDOM=0 00:11:17.551 07:04:01 -- target/invalid.sh@34 -- # nvmftestinit 00:11:17.551 07:04:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:17.551 07:04:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.551 07:04:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:17.551 07:04:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:17.551 07:04:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:17.551 07:04:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.551 07:04:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.551 07:04:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.551 07:04:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:17.551 07:04:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:17.551 07:04:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:17.551 07:04:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:17.551 07:04:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:17.551 07:04:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:17.551 07:04:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.551 07:04:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.551 07:04:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
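The host identity threaded through every nvme connect call is generated once per run by nvmf/common.sh, as the trace above shows. Roughly (how common.sh actually derives the bare host ID from the NQN is an assumption here):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")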
00:11:17.551 07:04:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:17.551 07:04:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.551 07:04:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.551 07:04:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.552 07:04:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.552 07:04:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.552 07:04:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.552 07:04:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.552 07:04:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.552 07:04:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:17.552 07:04:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:17.552 Cannot find device "nvmf_tgt_br" 00:11:17.552 07:04:01 -- nvmf/common.sh@154 -- # true 00:11:17.552 07:04:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.552 Cannot find device "nvmf_tgt_br2" 00:11:17.552 07:04:01 -- nvmf/common.sh@155 -- # true 00:11:17.552 07:04:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:17.552 07:04:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:17.552 Cannot find device "nvmf_tgt_br" 00:11:17.552 07:04:01 -- nvmf/common.sh@157 -- # true 00:11:17.552 07:04:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:17.552 Cannot find device "nvmf_tgt_br2" 00:11:17.552 07:04:01 -- nvmf/common.sh@158 -- # true 00:11:17.552 07:04:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:17.552 07:04:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:17.552 07:04:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:17.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.552 07:04:01 -- nvmf/common.sh@161 -- # true 00:11:17.552 07:04:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:17.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.810 07:04:01 -- nvmf/common.sh@162 -- # true 00:11:17.810 07:04:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:17.810 07:04:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:17.810 07:04:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:17.810 07:04:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:17.810 07:04:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:17.810 07:04:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:17.810 07:04:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:17.810 07:04:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:17.810 07:04:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:17.810 07:04:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:17.810 07:04:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:17.810 07:04:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:17.810 07:04:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
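nvmf_veth_init, whose commands stream past above, builds the whole test network from scratch. Condensed into a standalone sketch (names and addresses are the ones in the log; the stale-interface cleanup and error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair for the initiator side, two for the target side.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target ends into the namespace and address everything.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring the links up on both sides.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

The bridge, the iptables ACCEPT rules, and the ping checks that follow in the log complete the setup.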
00:11:17.810 07:04:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:17.810 07:04:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:17.810 07:04:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:17.810 07:04:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:17.810 07:04:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:17.810 07:04:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:17.810 07:04:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:17.810 07:04:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:17.810 07:04:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:17.810 07:04:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:17.810 07:04:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:17.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:11:17.810 00:11:17.810 --- 10.0.0.2 ping statistics --- 00:11:17.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.810 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:17.810 07:04:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:17.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:17.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:17.810 00:11:17.810 --- 10.0.0.3 ping statistics --- 00:11:17.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.810 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:17.810 07:04:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:17.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:17.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:17.810 00:11:17.810 --- 10.0.0.1 ping statistics --- 00:11:17.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.810 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:17.810 07:04:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.810 07:04:01 -- nvmf/common.sh@421 -- # return 0 00:11:17.810 07:04:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:17.810 07:04:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.810 07:04:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:17.810 07:04:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:17.810 07:04:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.810 07:04:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:17.810 07:04:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:17.810 07:04:01 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:17.810 07:04:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:17.810 07:04:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:17.810 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:17.810 07:04:01 -- nvmf/common.sh@469 -- # nvmfpid=66209 00:11:17.810 07:04:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.810 07:04:01 -- nvmf/common.sh@470 -- # waitforlisten 66209 00:11:17.810 07:04:01 -- common/autotest_common.sh@819 -- # '[' -z 66209 ']' 00:11:17.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.810 07:04:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.810 07:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:17.810 07:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.810 07:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:17.810 07:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:18.068 [2024-07-11 07:04:01.904203] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:18.068 [2024-07-11 07:04:01.904293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.068 [2024-07-11 07:04:02.035478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.068 [2024-07-11 07:04:02.105283] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:18.068 [2024-07-11 07:04:02.105409] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.068 [2024-07-11 07:04:02.105421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.068 [2024-07-11 07:04:02.105429] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:18.068 [2024-07-11 07:04:02.105618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.068 [2024-07-11 07:04:02.105757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.068 [2024-07-11 07:04:02.105806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.068 [2024-07-11 07:04:02.105808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.003 07:04:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:19.003 07:04:02 -- common/autotest_common.sh@852 -- # return 0 00:11:19.003 07:04:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:19.003 07:04:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:19.003 07:04:02 -- common/autotest_common.sh@10 -- # set +x 00:11:19.003 07:04:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.003 07:04:02 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:19.003 07:04:02 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6241 00:11:19.003 [2024-07-11 07:04:03.024139] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:19.003 07:04:03 -- target/invalid.sh@40 -- # out='2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6241 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:19.003 request: 00:11:19.003 { 00:11:19.003 "method": "nvmf_create_subsystem", 00:11:19.003 "params": { 00:11:19.003 "nqn": "nqn.2016-06.io.spdk:cnode6241", 00:11:19.003 "tgt_name": "foobar" 00:11:19.003 } 00:11:19.003 } 00:11:19.003 Got JSON-RPC error response 00:11:19.003 GoRPCClient: error on JSON-RPC call' 00:11:19.004 07:04:03 -- target/invalid.sh@41 -- # [[ 2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6241 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:19.004 request: 00:11:19.004 { 00:11:19.004 "method": "nvmf_create_subsystem", 00:11:19.004 "params": { 00:11:19.004 "nqn": "nqn.2016-06.io.spdk:cnode6241", 00:11:19.004 "tgt_name": "foobar" 00:11:19.004 } 00:11:19.004 } 00:11:19.004 Got JSON-RPC error response 00:11:19.004 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:19.004 07:04:03 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:19.004 07:04:03 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26398 00:11:19.262 [2024-07-11 07:04:03.304590] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26398: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:19.520 07:04:03 -- target/invalid.sh@45 -- # out='2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26398 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:19.520 request: 00:11:19.520 { 00:11:19.520 "method": "nvmf_create_subsystem", 00:11:19.520 "params": { 00:11:19.520 "nqn": "nqn.2016-06.io.spdk:cnode26398", 00:11:19.520 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:11:19.520 } 00:11:19.520 } 00:11:19.520 Got JSON-RPC error response 00:11:19.520 GoRPCClient: error on JSON-RPC call' 00:11:19.520 07:04:03 -- target/invalid.sh@46 -- # [[ 2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26398 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:19.520 request: 00:11:19.520 { 00:11:19.520 "method": "nvmf_create_subsystem", 00:11:19.520 "params": { 00:11:19.520 "nqn": "nqn.2016-06.io.spdk:cnode26398", 00:11:19.521 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:19.521 } 00:11:19.521 } 00:11:19.521 Got JSON-RPC error response 00:11:19.521 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:19.521 07:04:03 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:19.521 07:04:03 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21766 00:11:19.779 [2024-07-11 07:04:03.584987] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21766: invalid model number 'SPDK_Controller' 00:11:19.779 07:04:03 -- target/invalid.sh@50 -- # out='2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode21766], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:19.779 request: 00:11:19.779 { 00:11:19.779 "method": "nvmf_create_subsystem", 00:11:19.779 "params": { 00:11:19.779 "nqn": "nqn.2016-06.io.spdk:cnode21766", 00:11:19.779 "model_number": "SPDK_Controller\u001f" 00:11:19.779 } 00:11:19.779 } 00:11:19.779 Got JSON-RPC error response 00:11:19.779 GoRPCClient: error on JSON-RPC call' 00:11:19.779 07:04:03 -- target/invalid.sh@51 -- # [[ 2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode21766], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:19.779 request: 00:11:19.779 { 00:11:19.779 "method": "nvmf_create_subsystem", 00:11:19.779 "params": { 00:11:19.779 "nqn": "nqn.2016-06.io.spdk:cnode21766", 00:11:19.779 "model_number": "SPDK_Controller\u001f" 00:11:19.779 } 00:11:19.779 } 00:11:19.779 Got JSON-RPC error response 00:11:19.779 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:19.779 07:04:03 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:19.779 07:04:03 -- target/invalid.sh@19 -- # local length=21 ll 00:11:19.779 07:04:03 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:19.779 07:04:03 -- target/invalid.sh@21 -- # local chars 00:11:19.779 07:04:03 -- target/invalid.sh@22 -- # local string 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 62 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+='>' 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 46 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=. 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 43 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=+ 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 101 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=e 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 45 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=- 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 123 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+='{' 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 116 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=t 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 98 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=b 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 109 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=m 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 40 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+='(' 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 64 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=@ 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 49 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=1 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 52 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # string+=4 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.779 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # printf %x 114 00:11:19.779 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=r 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # printf %x 76 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=L 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # printf %x 97 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=a 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # printf %x 48 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=0 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # printf %x 120 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=x 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # printf %x 41 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=')' 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # printf %x 75 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=K 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # printf %x 105 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:19.780 07:04:03 -- target/invalid.sh@25 -- # string+=i 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:19.780 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:19.780 07:04:03 -- target/invalid.sh@28 -- # [[ > == \- ]] 00:11:19.780 07:04:03 -- target/invalid.sh@31 -- # echo '>.+e-{tbm(@14rLa0x)Ki' 00:11:19.780 07:04:03 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '>.+e-{tbm(@14rLa0x)Ki' nqn.2016-06.io.spdk:cnode17899 
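The long printf/echo run above is gen_random_s assembling a 21-character serial number one printable byte at a time; the result, '>.+e-{tbm(@14rLa0x)Ki', is then handed to nvmf_create_subsystem, which is expected to reject it with an "Invalid SN" error just as the fixed strings were rejected earlier. Condensed into a fixed-sequence sketch (the real helper draws the byte values at random from the 32-127 range):

  string=""
  for code in 62 46 43 101 45 123 116 98 109 40 64 49 52 114 76 97 48 120 41 75 105; do
      string+=$(printf "\\x$(printf '%x' "$code")")
  done
  printf '%s\n' "$string"   # -> >.+e-{tbm(@14rLa0x)Ki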
00:11:20.047 [2024-07-11 07:04:03.969503] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17899: invalid serial number '>.+e-{tbm(@14rLa0x)Ki' 00:11:20.047 07:04:03 -- target/invalid.sh@54 -- # out='2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17899 serial_number:>.+e-{tbm(@14rLa0x)Ki], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN >.+e-{tbm(@14rLa0x)Ki 00:11:20.047 request: 00:11:20.047 { 00:11:20.047 "method": "nvmf_create_subsystem", 00:11:20.047 "params": { 00:11:20.047 "nqn": "nqn.2016-06.io.spdk:cnode17899", 00:11:20.047 "serial_number": ">.+e-{tbm(@14rLa0x)Ki" 00:11:20.047 } 00:11:20.047 } 00:11:20.047 Got JSON-RPC error response 00:11:20.047 GoRPCClient: error on JSON-RPC call' 00:11:20.047 07:04:03 -- target/invalid.sh@55 -- # [[ 2024/07/11 07:04:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17899 serial_number:>.+e-{tbm(@14rLa0x)Ki], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN >.+e-{tbm(@14rLa0x)Ki 00:11:20.047 request: 00:11:20.047 { 00:11:20.047 "method": "nvmf_create_subsystem", 00:11:20.047 "params": { 00:11:20.047 "nqn": "nqn.2016-06.io.spdk:cnode17899", 00:11:20.047 "serial_number": ">.+e-{tbm(@14rLa0x)Ki" 00:11:20.047 } 00:11:20.047 } 00:11:20.047 Got JSON-RPC error response 00:11:20.047 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:20.047 07:04:03 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:20.047 07:04:03 -- target/invalid.sh@19 -- # local length=41 ll 00:11:20.047 07:04:03 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:20.047 07:04:03 -- target/invalid.sh@21 -- # local chars 00:11:20.047 07:04:03 -- target/invalid.sh@22 -- # local string 00:11:20.047 07:04:03 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:20.047 07:04:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 88 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=X 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 42 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+='*' 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 58 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=: 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 114 00:11:20.047 07:04:04 
-- target/invalid.sh@25 -- # echo -e '\x72' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=r 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 65 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=A 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 93 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=']' 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 50 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=2 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 46 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=. 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 88 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=X 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 44 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=, 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 100 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=d 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 67 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=C 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 50 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=2 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 77 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=M 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 127 00:11:20.047 07:04:04 -- 
target/invalid.sh@25 -- # echo -e '\x7f' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=$'\177' 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 61 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+== 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 115 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=s 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 56 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=8 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 59 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+=';' 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.047 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # printf %x 62 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:20.047 07:04:04 -- target/invalid.sh@25 -- # string+='>' 00:11:20.048 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.048 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.048 07:04:04 -- target/invalid.sh@25 -- # printf %x 70 00:11:20.048 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:20.048 07:04:04 -- target/invalid.sh@25 -- # string+=F 00:11:20.048 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.048 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.048 07:04:04 -- target/invalid.sh@25 -- # printf %x 112 00:11:20.048 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=p 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 56 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=8 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 43 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=+ 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 57 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=9 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 102 00:11:20.342 07:04:04 -- 
target/invalid.sh@25 -- # echo -e '\x66' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=f 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 84 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=T 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 41 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=')' 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 124 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+='|' 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 74 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=J 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 106 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=j 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 100 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=d 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 105 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=i 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.342 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # printf %x 41 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:20.342 07:04:04 -- target/invalid.sh@25 -- # string+=')' 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # printf %x 69 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # string+=E 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # printf %x 44 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # string+=, 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # printf %x 109 00:11:20.343 07:04:04 -- 
target/invalid.sh@25 -- # echo -e '\x6d' 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # string+=m 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # printf %x 63 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # string+='?' 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # printf %x 43 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # string+=+ 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # printf %x 102 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # string+=f 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # printf %x 64 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:20.343 07:04:04 -- target/invalid.sh@25 -- # string+=@ 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.343 07:04:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.343 07:04:04 -- target/invalid.sh@28 -- # [[ X == \- ]] 00:11:20.343 07:04:04 -- target/invalid.sh@31 -- # echo 'X*:rA]2.X,dC2M=s8;>Fp8+9fT)|Jjdi)E,m?+f@' 00:11:20.343 07:04:04 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'X*:rA]2.X,dC2M=s8;>Fp8+9fT)|Jjdi)E,m?+f@' nqn.2016-06.io.spdk:cnode21332 00:11:20.611 [2024-07-11 07:04:04.442183] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21332: invalid model number 'X*:rA]2.X,dC2M=s8;>Fp8+9fT)|Jjdi)E,m?+f@' 00:11:20.611 07:04:04 -- target/invalid.sh@58 -- # out='2024/07/11 07:04:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:X*:rA]2.X,dC2M=s8;>Fp8+9fT)|Jjdi)E,m?+f@ nqn:nqn.2016-06.io.spdk:cnode21332], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN X*:rA]2.X,dC2M=s8;>Fp8+9fT)|Jjdi)E,m?+f@ 00:11:20.611 request: 00:11:20.611 { 00:11:20.611 "method": "nvmf_create_subsystem", 00:11:20.611 "params": { 00:11:20.611 "nqn": "nqn.2016-06.io.spdk:cnode21332", 00:11:20.611 "model_number": "X*:rA]2.X,dC2M\u007f=s8;>Fp8+9fT)|Jjdi)E,m?+f@" 00:11:20.611 } 00:11:20.611 } 00:11:20.611 Got JSON-RPC error response 00:11:20.611 GoRPCClient: error on JSON-RPC call' 00:11:20.611 07:04:04 -- target/invalid.sh@59 -- # [[ 2024/07/11 07:04:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:X*:rA]2.X,dC2M=s8;>Fp8+9fT)|Jjdi)E,m?+f@ nqn:nqn.2016-06.io.spdk:cnode21332], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN X*:rA]2.X,dC2M=s8;>Fp8+9fT)|Jjdi)E,m?+f@ 00:11:20.611 request: 00:11:20.611 { 00:11:20.611 "method": "nvmf_create_subsystem", 00:11:20.611 "params": { 00:11:20.611 "nqn": "nqn.2016-06.io.spdk:cnode21332", 00:11:20.611 "model_number": "X*:rA]2.X,dC2M\u007f=s8;>Fp8+9fT)|Jjdi)E,m?+f@" 00:11:20.611 } 00:11:20.611 } 00:11:20.611 Got JSON-RPC error response 00:11:20.611 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* 
]] 00:11:20.611 07:04:04 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:20.869 [2024-07-11 07:04:04.694713] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.869 07:04:04 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:20.869 07:04:04 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:20.869 07:04:04 -- target/invalid.sh@67 -- # echo '' 00:11:20.869 07:04:04 -- target/invalid.sh@67 -- # head -n 1 00:11:20.869 07:04:04 -- target/invalid.sh@67 -- # IP= 00:11:20.869 07:04:04 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:21.127 [2024-07-11 07:04:05.179619] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:21.386 07:04:05 -- target/invalid.sh@69 -- # out='2024/07/11 07:04:05 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:21.386 request: 00:11:21.386 { 00:11:21.386 "method": "nvmf_subsystem_remove_listener", 00:11:21.386 "params": { 00:11:21.386 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:21.386 "listen_address": { 00:11:21.386 "trtype": "tcp", 00:11:21.386 "traddr": "", 00:11:21.386 "trsvcid": "4421" 00:11:21.386 } 00:11:21.386 } 00:11:21.386 } 00:11:21.386 Got JSON-RPC error response 00:11:21.386 GoRPCClient: error on JSON-RPC call' 00:11:21.386 07:04:05 -- target/invalid.sh@70 -- # [[ 2024/07/11 07:04:05 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:21.386 request: 00:11:21.386 { 00:11:21.386 "method": "nvmf_subsystem_remove_listener", 00:11:21.386 "params": { 00:11:21.386 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:21.386 "listen_address": { 00:11:21.386 "trtype": "tcp", 00:11:21.386 "traddr": "", 00:11:21.386 "trsvcid": "4421" 00:11:21.386 } 00:11:21.386 } 00:11:21.386 } 00:11:21.386 Got JSON-RPC error response 00:11:21.386 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:21.386 07:04:05 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30255 -i 0 00:11:21.645 [2024-07-11 07:04:05.455978] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30255: invalid cntlid range [0-65519] 00:11:21.645 07:04:05 -- target/invalid.sh@73 -- # out='2024/07/11 07:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30255], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:21.645 request: 00:11:21.645 { 00:11:21.646 "method": "nvmf_create_subsystem", 00:11:21.646 "params": { 00:11:21.646 "nqn": "nqn.2016-06.io.spdk:cnode30255", 00:11:21.646 "min_cntlid": 0 00:11:21.646 } 00:11:21.646 } 00:11:21.646 Got JSON-RPC error response 00:11:21.646 GoRPCClient: error on JSON-RPC call' 00:11:21.646 07:04:05 -- target/invalid.sh@74 -- # [[ 2024/07/11 07:04:05 error on JSON-RPC call, 
method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30255], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:21.646 request: 00:11:21.646 { 00:11:21.646 "method": "nvmf_create_subsystem", 00:11:21.646 "params": { 00:11:21.646 "nqn": "nqn.2016-06.io.spdk:cnode30255", 00:11:21.646 "min_cntlid": 0 00:11:21.646 } 00:11:21.646 } 00:11:21.646 Got JSON-RPC error response 00:11:21.646 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:21.646 07:04:05 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16214 -i 65520 00:11:21.646 [2024-07-11 07:04:05.688284] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16214: invalid cntlid range [65520-65519] 00:11:21.905 07:04:05 -- target/invalid.sh@75 -- # out='2024/07/11 07:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16214], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:21.905 request: 00:11:21.905 { 00:11:21.905 "method": "nvmf_create_subsystem", 00:11:21.905 "params": { 00:11:21.905 "nqn": "nqn.2016-06.io.spdk:cnode16214", 00:11:21.905 "min_cntlid": 65520 00:11:21.905 } 00:11:21.905 } 00:11:21.905 Got JSON-RPC error response 00:11:21.905 GoRPCClient: error on JSON-RPC call' 00:11:21.905 07:04:05 -- target/invalid.sh@76 -- # [[ 2024/07/11 07:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16214], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:21.905 request: 00:11:21.905 { 00:11:21.905 "method": "nvmf_create_subsystem", 00:11:21.905 "params": { 00:11:21.905 "nqn": "nqn.2016-06.io.spdk:cnode16214", 00:11:21.905 "min_cntlid": 65520 00:11:21.905 } 00:11:21.905 } 00:11:21.905 Got JSON-RPC error response 00:11:21.905 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:21.905 07:04:05 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6973 -I 0 00:11:21.905 [2024-07-11 07:04:05.956619] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6973: invalid cntlid range [1-0] 00:11:22.164 07:04:05 -- target/invalid.sh@77 -- # out='2024/07/11 07:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode6973], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:22.164 request: 00:11:22.164 { 00:11:22.164 "method": "nvmf_create_subsystem", 00:11:22.164 "params": { 00:11:22.164 "nqn": "nqn.2016-06.io.spdk:cnode6973", 00:11:22.164 "max_cntlid": 0 00:11:22.164 } 00:11:22.164 } 00:11:22.164 Got JSON-RPC error response 00:11:22.164 GoRPCClient: error on JSON-RPC call' 00:11:22.164 07:04:05 -- target/invalid.sh@78 -- # [[ 2024/07/11 07:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode6973], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:22.164 request: 00:11:22.164 { 00:11:22.164 "method": "nvmf_create_subsystem", 00:11:22.164 "params": { 00:11:22.164 "nqn": 
"nqn.2016-06.io.spdk:cnode6973", 00:11:22.164 "max_cntlid": 0 00:11:22.164 } 00:11:22.164 } 00:11:22.164 Got JSON-RPC error response 00:11:22.164 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:22.164 07:04:05 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22557 -I 65520 00:11:22.164 [2024-07-11 07:04:06.209028] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22557: invalid cntlid range [1-65520] 00:11:22.423 07:04:06 -- target/invalid.sh@79 -- # out='2024/07/11 07:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode22557], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:22.423 request: 00:11:22.423 { 00:11:22.423 "method": "nvmf_create_subsystem", 00:11:22.423 "params": { 00:11:22.423 "nqn": "nqn.2016-06.io.spdk:cnode22557", 00:11:22.423 "max_cntlid": 65520 00:11:22.423 } 00:11:22.423 } 00:11:22.423 Got JSON-RPC error response 00:11:22.423 GoRPCClient: error on JSON-RPC call' 00:11:22.423 07:04:06 -- target/invalid.sh@80 -- # [[ 2024/07/11 07:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode22557], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:22.423 request: 00:11:22.423 { 00:11:22.423 "method": "nvmf_create_subsystem", 00:11:22.423 "params": { 00:11:22.423 "nqn": "nqn.2016-06.io.spdk:cnode22557", 00:11:22.423 "max_cntlid": 65520 00:11:22.423 } 00:11:22.423 } 00:11:22.423 Got JSON-RPC error response 00:11:22.423 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:22.423 07:04:06 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31395 -i 6 -I 5 00:11:22.423 [2024-07-11 07:04:06.413349] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31395: invalid cntlid range [6-5] 00:11:22.423 07:04:06 -- target/invalid.sh@83 -- # out='2024/07/11 07:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31395], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:22.423 request: 00:11:22.423 { 00:11:22.423 "method": "nvmf_create_subsystem", 00:11:22.423 "params": { 00:11:22.423 "nqn": "nqn.2016-06.io.spdk:cnode31395", 00:11:22.423 "min_cntlid": 6, 00:11:22.423 "max_cntlid": 5 00:11:22.423 } 00:11:22.423 } 00:11:22.423 Got JSON-RPC error response 00:11:22.423 GoRPCClient: error on JSON-RPC call' 00:11:22.423 07:04:06 -- target/invalid.sh@84 -- # [[ 2024/07/11 07:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31395], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:22.423 request: 00:11:22.423 { 00:11:22.424 "method": "nvmf_create_subsystem", 00:11:22.424 "params": { 00:11:22.424 "nqn": "nqn.2016-06.io.spdk:cnode31395", 00:11:22.424 "min_cntlid": 6, 00:11:22.424 "max_cntlid": 5 00:11:22.424 } 00:11:22.424 } 00:11:22.424 Got JSON-RPC error response 00:11:22.424 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:22.424 07:04:06 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:22.683 07:04:06 -- target/invalid.sh@87 -- # out='request: 00:11:22.683 { 00:11:22.683 "name": "foobar", 00:11:22.683 "method": "nvmf_delete_target", 00:11:22.683 "req_id": 1 00:11:22.683 } 00:11:22.683 Got JSON-RPC error response 00:11:22.683 response: 00:11:22.683 { 00:11:22.683 "code": -32602, 00:11:22.683 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:22.683 }' 00:11:22.683 07:04:06 -- target/invalid.sh@88 -- # [[ request: 00:11:22.683 { 00:11:22.683 "name": "foobar", 00:11:22.683 "method": "nvmf_delete_target", 00:11:22.683 "req_id": 1 00:11:22.683 } 00:11:22.683 Got JSON-RPC error response 00:11:22.683 response: 00:11:22.683 { 00:11:22.683 "code": -32602, 00:11:22.683 "message": "The specified target doesn't exist, cannot delete it." 00:11:22.683 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:22.683 07:04:06 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:22.683 07:04:06 -- target/invalid.sh@91 -- # nvmftestfini 00:11:22.683 07:04:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:22.683 07:04:06 -- nvmf/common.sh@116 -- # sync 00:11:22.683 07:04:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:22.683 07:04:06 -- nvmf/common.sh@119 -- # set +e 00:11:22.683 07:04:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:22.683 07:04:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:22.683 rmmod nvme_tcp 00:11:22.683 rmmod nvme_fabrics 00:11:22.683 rmmod nvme_keyring 00:11:22.683 07:04:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:22.683 07:04:06 -- nvmf/common.sh@123 -- # set -e 00:11:22.683 07:04:06 -- nvmf/common.sh@124 -- # return 0 00:11:22.683 07:04:06 -- nvmf/common.sh@477 -- # '[' -n 66209 ']' 00:11:22.683 07:04:06 -- nvmf/common.sh@478 -- # killprocess 66209 00:11:22.683 07:04:06 -- common/autotest_common.sh@926 -- # '[' -z 66209 ']' 00:11:22.683 07:04:06 -- common/autotest_common.sh@930 -- # kill -0 66209 00:11:22.683 07:04:06 -- common/autotest_common.sh@931 -- # uname 00:11:22.683 07:04:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:22.683 07:04:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66209 00:11:22.683 07:04:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:22.683 killing process with pid 66209 00:11:22.683 07:04:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:22.683 07:04:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66209' 00:11:22.683 07:04:06 -- common/autotest_common.sh@945 -- # kill 66209 00:11:22.683 07:04:06 -- common/autotest_common.sh@950 -- # wait 66209 00:11:22.942 07:04:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:22.942 07:04:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:22.942 07:04:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:22.942 07:04:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:22.942 07:04:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:22.942 07:04:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.942 07:04:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.942 07:04:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.942 07:04:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:22.942 00:11:22.942 real 
0m5.522s 00:11:22.942 user 0m22.055s 00:11:22.942 sys 0m1.153s 00:11:22.942 07:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.942 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:11:22.942 ************************************ 00:11:22.942 END TEST nvmf_invalid 00:11:22.942 ************************************ 00:11:22.942 07:04:06 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:22.942 07:04:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:22.942 07:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.942 07:04:06 -- common/autotest_common.sh@10 -- # set +x 00:11:22.942 ************************************ 00:11:22.942 START TEST nvmf_abort 00:11:22.942 ************************************ 00:11:22.942 07:04:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:23.201 * Looking for test storage... 00:11:23.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:23.201 07:04:07 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:23.201 07:04:07 -- nvmf/common.sh@7 -- # uname -s 00:11:23.201 07:04:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.201 07:04:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.201 07:04:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.201 07:04:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.201 07:04:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.201 07:04:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.201 07:04:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.201 07:04:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.201 07:04:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.201 07:04:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.201 07:04:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:23.201 07:04:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:23.201 07:04:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.201 07:04:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.201 07:04:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:23.201 07:04:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.201 07:04:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.201 07:04:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.201 07:04:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.201 07:04:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.201 07:04:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.201 07:04:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.201 07:04:07 -- paths/export.sh@5 -- # export PATH 00:11:23.201 07:04:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.201 07:04:07 -- nvmf/common.sh@46 -- # : 0 00:11:23.201 07:04:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:23.201 07:04:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:23.201 07:04:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:23.201 07:04:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.201 07:04:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.201 07:04:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:23.201 07:04:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:23.201 07:04:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:23.201 07:04:07 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.201 07:04:07 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:23.201 07:04:07 -- target/abort.sh@14 -- # nvmftestinit 00:11:23.201 07:04:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:23.201 07:04:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.201 07:04:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:23.201 07:04:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:23.201 07:04:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:23.201 07:04:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.201 07:04:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.201 07:04:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.201 07:04:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:23.201 07:04:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:23.201 07:04:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:23.201 07:04:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:23.201 07:04:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:23.201 07:04:07 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:11:23.201 07:04:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.201 07:04:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.201 07:04:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:23.201 07:04:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:23.201 07:04:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:23.201 07:04:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:23.201 07:04:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:23.201 07:04:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.201 07:04:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:23.201 07:04:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:23.201 07:04:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:23.201 07:04:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:23.201 07:04:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:23.201 07:04:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:23.201 Cannot find device "nvmf_tgt_br" 00:11:23.201 07:04:07 -- nvmf/common.sh@154 -- # true 00:11:23.201 07:04:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.201 Cannot find device "nvmf_tgt_br2" 00:11:23.201 07:04:07 -- nvmf/common.sh@155 -- # true 00:11:23.201 07:04:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:23.201 07:04:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:23.201 Cannot find device "nvmf_tgt_br" 00:11:23.201 07:04:07 -- nvmf/common.sh@157 -- # true 00:11:23.201 07:04:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:23.201 Cannot find device "nvmf_tgt_br2" 00:11:23.201 07:04:07 -- nvmf/common.sh@158 -- # true 00:11:23.201 07:04:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:23.201 07:04:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:23.201 07:04:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.201 07:04:07 -- nvmf/common.sh@161 -- # true 00:11:23.201 07:04:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.201 07:04:07 -- nvmf/common.sh@162 -- # true 00:11:23.201 07:04:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:23.201 07:04:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:23.201 07:04:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:23.201 07:04:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:23.201 07:04:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:23.201 07:04:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:23.201 07:04:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:23.201 07:04:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:23.201 07:04:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:23.460 07:04:07 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:11:23.460 07:04:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:23.460 07:04:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:23.460 07:04:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:23.460 07:04:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:23.460 07:04:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:23.460 07:04:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:23.460 07:04:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:23.460 07:04:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:23.460 07:04:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:23.460 07:04:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:23.460 07:04:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:23.460 07:04:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:23.460 07:04:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:23.460 07:04:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:23.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:11:23.460 00:11:23.460 --- 10.0.0.2 ping statistics --- 00:11:23.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.460 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:23.460 07:04:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:23.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:23.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:23.460 00:11:23.460 --- 10.0.0.3 ping statistics --- 00:11:23.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.460 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:23.460 07:04:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:23.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:23.460 00:11:23.460 --- 10.0.0.1 ping statistics --- 00:11:23.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.460 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:23.460 07:04:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.460 07:04:07 -- nvmf/common.sh@421 -- # return 0 00:11:23.460 07:04:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:23.460 07:04:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.460 07:04:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:23.460 07:04:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:23.460 07:04:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.460 07:04:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:23.460 07:04:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:23.460 07:04:07 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:23.460 07:04:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:23.460 07:04:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:23.460 07:04:07 -- common/autotest_common.sh@10 -- # set +x 00:11:23.460 07:04:07 -- nvmf/common.sh@469 -- # nvmfpid=66707 00:11:23.460 07:04:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:23.460 07:04:07 -- nvmf/common.sh@470 -- # waitforlisten 66707 00:11:23.460 07:04:07 -- common/autotest_common.sh@819 -- # '[' -z 66707 ']' 00:11:23.460 07:04:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.460 07:04:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:23.460 07:04:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.460 07:04:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:23.460 07:04:07 -- common/autotest_common.sh@10 -- # set +x 00:11:23.460 [2024-07-11 07:04:07.448517] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:23.460 [2024-07-11 07:04:07.448610] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.718 [2024-07-11 07:04:07.586985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:23.718 [2024-07-11 07:04:07.671081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:23.718 [2024-07-11 07:04:07.671227] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.718 [2024-07-11 07:04:07.671239] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.718 [2024-07-11 07:04:07.671247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
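The nvmf_veth_init trace above is the test network being built from scratch: a host-side initiator veth, two target-side veths inside a network namespace, and a bridge tying their peers together. Condensed into plain commands, it amounts to roughly the following (a sketch that only collapses the per-interface steps from the trace into loops; the namespace, interface and bridge names and the 10.0.0.x addresses are the ones the log itself uses):

    # Initiator veth stays in the root namespace (10.0.0.1); the two target veths
    # live inside nvmf_tgt_ns_spdk (10.0.0.2 and 10.0.0.3); their peers are bridged.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the test interface
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let frames cross the bridge
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host to target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target namespace back to host

The earlier "Cannot find device" and "Cannot open network namespace" messages are the teardown half of the same helper running first on a machine that has none of these devices yet; they are expected, not failures.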
00:11:23.718 [2024-07-11 07:04:07.671795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.718 [2024-07-11 07:04:07.671944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.718 [2024-07-11 07:04:07.671951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.286 07:04:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:24.286 07:04:08 -- common/autotest_common.sh@852 -- # return 0 00:11:24.286 07:04:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:24.286 07:04:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:24.286 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.286 07:04:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.286 07:04:08 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:24.286 07:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.286 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.286 [2024-07-11 07:04:08.321015] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.286 07:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.286 07:04:08 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:24.286 07:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.286 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.545 Malloc0 00:11:24.545 07:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.545 07:04:08 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:24.545 07:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.545 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.545 Delay0 00:11:24.545 07:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.545 07:04:08 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:24.545 07:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.545 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.545 07:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.545 07:04:08 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:24.545 07:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.545 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.545 07:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.545 07:04:08 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:24.545 07:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.545 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.545 [2024-07-11 07:04:08.390388] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.545 07:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.545 07:04:08 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:24.545 07:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.545 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.545 07:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.545 07:04:08 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:24.545 [2024-07-11 07:04:08.570372] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:27.107 Initializing NVMe Controllers 00:11:27.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:27.107 controller IO queue size 128 less than required 00:11:27.107 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:27.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:27.107 Initialization complete. Launching workers. 00:11:27.107 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 39037 00:11:27.107 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39098, failed to submit 62 00:11:27.107 success 39037, unsuccess 61, failed 0 00:11:27.107 07:04:10 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:27.107 07:04:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.107 07:04:10 -- common/autotest_common.sh@10 -- # set +x 00:11:27.107 07:04:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.107 07:04:10 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:27.107 07:04:10 -- target/abort.sh@38 -- # nvmftestfini 00:11:27.107 07:04:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:27.107 07:04:10 -- nvmf/common.sh@116 -- # sync 00:11:27.107 07:04:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:27.107 07:04:10 -- nvmf/common.sh@119 -- # set +e 00:11:27.107 07:04:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:27.107 07:04:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:27.108 rmmod nvme_tcp 00:11:27.108 rmmod nvme_fabrics 00:11:27.108 rmmod nvme_keyring 00:11:27.108 07:04:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:27.108 07:04:10 -- nvmf/common.sh@123 -- # set -e 00:11:27.108 07:04:10 -- nvmf/common.sh@124 -- # return 0 00:11:27.108 07:04:10 -- nvmf/common.sh@477 -- # '[' -n 66707 ']' 00:11:27.108 07:04:10 -- nvmf/common.sh@478 -- # killprocess 66707 00:11:27.108 07:04:10 -- common/autotest_common.sh@926 -- # '[' -z 66707 ']' 00:11:27.108 07:04:10 -- common/autotest_common.sh@930 -- # kill -0 66707 00:11:27.108 07:04:10 -- common/autotest_common.sh@931 -- # uname 00:11:27.108 07:04:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:27.108 07:04:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66707 00:11:27.108 07:04:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:27.108 07:04:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:27.108 killing process with pid 66707 00:11:27.108 07:04:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66707' 00:11:27.108 07:04:10 -- common/autotest_common.sh@945 -- # kill 66707 00:11:27.108 07:04:10 -- common/autotest_common.sh@950 -- # wait 66707 00:11:27.108 07:04:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:27.108 07:04:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:27.108 07:04:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:27.108 07:04:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.108 07:04:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:27.108 07:04:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.108 
07:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.108 07:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.108 07:04:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:27.108 00:11:27.108 real 0m4.091s 00:11:27.108 user 0m11.717s 00:11:27.108 sys 0m1.012s 00:11:27.108 07:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.108 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:11:27.108 ************************************ 00:11:27.108 END TEST nvmf_abort 00:11:27.108 ************************************ 00:11:27.108 07:04:11 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:27.108 07:04:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:27.108 07:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:27.108 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:11:27.108 ************************************ 00:11:27.108 START TEST nvmf_ns_hotplug_stress 00:11:27.108 ************************************ 00:11:27.108 07:04:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:27.366 * Looking for test storage... 00:11:27.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.366 07:04:11 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.366 07:04:11 -- nvmf/common.sh@7 -- # uname -s 00:11:27.366 07:04:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.366 07:04:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.366 07:04:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.366 07:04:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.366 07:04:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.366 07:04:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.366 07:04:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.366 07:04:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.366 07:04:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.366 07:04:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.366 07:04:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:27.366 07:04:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:11:27.366 07:04:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.366 07:04:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.366 07:04:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.366 07:04:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.366 07:04:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.366 07:04:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.366 07:04:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.366 07:04:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 07:04:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 07:04:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 07:04:11 -- paths/export.sh@5 -- # export PATH 00:11:27.366 07:04:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.366 07:04:11 -- nvmf/common.sh@46 -- # : 0 00:11:27.366 07:04:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:27.366 07:04:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:27.366 07:04:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:27.366 07:04:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.366 07:04:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.366 07:04:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:27.366 07:04:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:27.366 07:04:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:27.366 07:04:11 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.366 07:04:11 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:27.366 07:04:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:27.366 07:04:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.366 07:04:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:27.366 07:04:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:27.366 07:04:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:27.367 07:04:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
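For reference, the nvmf_abort case that finished above configured the target over JSON-RPC and then ran the abort example once; its rpc_cmd trace condenses to roughly the following (a sketch; $rpc is shorthand introduced here for the scripts/rpc.py client, and the flag readings in the comments are interpretations rather than text from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256                   # TCP transport, options exactly as traced
    $rpc bdev_malloc_create 64 4096 -b Malloc0                            # 64 MiB RAM-backed bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000                      # about 1 s of added latency on every I/O path
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0     # -a allows any host, -s sets the serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Drive the deliberately slow namespace for 1 s at queue depth 128 and abort what is still queued.
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The "success 39037, unsuccess 61" tally in its output suggests that almost every submitted abort found its target command still outstanding, which is what the delay bdev stacked on Malloc0 is there to achieve.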
00:11:27.367 07:04:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.367 07:04:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.367 07:04:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:27.367 07:04:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:27.367 07:04:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:27.367 07:04:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:27.367 07:04:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:27.367 07:04:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:27.367 07:04:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.367 07:04:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.367 07:04:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:27.367 07:04:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:27.367 07:04:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.367 07:04:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.367 07:04:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.367 07:04:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.367 07:04:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.367 07:04:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.367 07:04:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.367 07:04:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.367 07:04:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:27.367 07:04:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:27.367 Cannot find device "nvmf_tgt_br" 00:11:27.367 07:04:11 -- nvmf/common.sh@154 -- # true 00:11:27.367 07:04:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.367 Cannot find device "nvmf_tgt_br2" 00:11:27.367 07:04:11 -- nvmf/common.sh@155 -- # true 00:11:27.367 07:04:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:27.367 07:04:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:27.367 Cannot find device "nvmf_tgt_br" 00:11:27.367 07:04:11 -- nvmf/common.sh@157 -- # true 00:11:27.367 07:04:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:27.367 Cannot find device "nvmf_tgt_br2" 00:11:27.367 07:04:11 -- nvmf/common.sh@158 -- # true 00:11:27.367 07:04:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:27.367 07:04:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:27.367 07:04:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.367 07:04:11 -- nvmf/common.sh@161 -- # true 00:11:27.367 07:04:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.367 07:04:11 -- nvmf/common.sh@162 -- # true 00:11:27.367 07:04:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.367 07:04:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.367 07:04:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.367 07:04:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.367 07:04:11 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.367 07:04:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.367 07:04:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.367 07:04:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:27.625 07:04:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:27.625 07:04:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:27.625 07:04:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:27.625 07:04:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:27.625 07:04:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:27.625 07:04:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.625 07:04:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.625 07:04:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.625 07:04:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:27.625 07:04:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:27.625 07:04:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.625 07:04:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:27.625 07:04:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.625 07:04:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.625 07:04:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.625 07:04:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:27.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:11:27.625 00:11:27.625 --- 10.0.0.2 ping statistics --- 00:11:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.625 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:27.625 07:04:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:27.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:11:27.625 00:11:27.625 --- 10.0.0.3 ping statistics --- 00:11:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.625 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:27.625 07:04:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:11:27.625 00:11:27.625 --- 10.0.0.1 ping statistics --- 00:11:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.625 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:11:27.625 07:04:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.625 07:04:11 -- nvmf/common.sh@421 -- # return 0 00:11:27.625 07:04:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:27.625 07:04:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.625 07:04:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:27.625 07:04:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:27.625 07:04:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.625 07:04:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:27.625 07:04:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:27.625 07:04:11 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:27.625 07:04:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:27.625 07:04:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:27.625 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 07:04:11 -- nvmf/common.sh@469 -- # nvmfpid=66963 00:11:27.625 07:04:11 -- nvmf/common.sh@470 -- # waitforlisten 66963 00:11:27.625 07:04:11 -- common/autotest_common.sh@819 -- # '[' -z 66963 ']' 00:11:27.625 07:04:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:27.625 07:04:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.625 07:04:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:27.625 07:04:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.625 07:04:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:27.625 07:04:11 -- common/autotest_common.sh@10 -- # set +x 00:11:27.625 [2024-07-11 07:04:11.630787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:27.625 [2024-07-11 07:04:11.630877] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.884 [2024-07-11 07:04:11.772201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.884 [2024-07-11 07:04:11.887833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:27.884 [2024-07-11 07:04:11.888003] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.884 [2024-07-11 07:04:11.888020] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.884 [2024-07-11 07:04:11.888033] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
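As in the abort case, the target for this test runs entirely inside the nvmf_tgt_ns_spdk namespace. Stripped of the xtrace noise, the start-up logged here (pid 66963) is roughly the following; the polling loop is a stand-in for the suite's waitforlisten helper rather than a copy of it:

    # -i 0: shared-memory instance id 0; -e 0xFFFF: enable every tracepoint group;
    # -m 0xE: run reactors on cores 1-3, matching the "Reactor started on core 1/2/3" notices.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Do not issue configuration RPCs until the target answers on its UNIX-domain RPC socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done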
00:11:27.884 [2024-07-11 07:04:11.888359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.884 [2024-07-11 07:04:11.888536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.884 [2024-07-11 07:04:11.888550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.820 07:04:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:28.820 07:04:12 -- common/autotest_common.sh@852 -- # return 0 00:11:28.820 07:04:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:28.820 07:04:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:28.820 07:04:12 -- common/autotest_common.sh@10 -- # set +x 00:11:28.820 07:04:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.820 07:04:12 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:28.820 07:04:12 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:29.079 [2024-07-11 07:04:12.898924] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.079 07:04:12 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:29.337 07:04:13 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.596 [2024-07-11 07:04:13.417551] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.596 07:04:13 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:29.853 07:04:13 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:30.111 Malloc0 00:11:30.111 07:04:13 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:30.368 Delay0 00:11:30.368 07:04:14 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.368 07:04:14 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:30.626 NULL1 00:11:30.626 07:04:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:30.883 07:04:14 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:30.883 07:04:14 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67100 00:11:30.883 07:04:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:30.883 07:04:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.260 Read completed with error (sct=0, sc=11) 00:11:32.260 07:04:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.260 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:11:32.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.260 07:04:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:32.260 07:04:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:32.520 true 00:11:32.520 07:04:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:32.520 07:04:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.455 07:04:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.455 07:04:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:33.455 07:04:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:33.713 true 00:11:33.713 07:04:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:33.713 07:04:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.972 07:04:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.231 07:04:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:34.231 07:04:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:34.231 true 00:11:34.231 07:04:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:34.231 07:04:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.607 07:04:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:35.607 07:04:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:35.607 07:04:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:35.865 true 00:11:35.865 07:04:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:35.865 07:04:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.802 07:04:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.802 07:04:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:36.802 07:04:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:37.060 true 00:11:37.060 07:04:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:37.060 07:04:20 -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.319 07:04:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.319 07:04:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:37.319 07:04:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:37.577 true 00:11:37.577 07:04:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:37.577 07:04:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.513 07:04:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.772 07:04:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:38.772 07:04:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:39.031 true 00:11:39.031 07:04:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:39.031 07:04:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.292 07:04:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.550 07:04:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:39.550 07:04:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:39.808 true 00:11:39.808 07:04:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:39.808 07:04:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.741 07:04:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.000 07:04:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:41.000 07:04:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:41.000 true 00:11:41.000 07:04:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:41.000 07:04:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.258 07:04:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.517 07:04:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:41.517 07:04:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:41.776 true 00:11:41.776 07:04:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:41.776 07:04:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.711 07:04:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.970 07:04:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:42.970 07:04:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:11:42.970 true 00:11:43.229 07:04:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:43.229 07:04:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.229 07:04:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.488 07:04:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:43.488 07:04:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:43.746 true 00:11:43.746 07:04:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:43.746 07:04:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.744 07:04:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.009 07:04:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:45.009 07:04:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:45.009 true 00:11:45.009 07:04:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:45.010 07:04:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.268 07:04:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.526 07:04:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:45.526 07:04:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:45.785 true 00:11:45.785 07:04:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:45.785 07:04:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.719 07:04:30 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.978 07:04:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:46.978 07:04:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:47.235 true 00:11:47.235 07:04:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:47.235 07:04:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.235 07:04:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.493 07:04:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:47.493 07:04:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:47.752 true 00:11:47.752 07:04:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:47.752 07:04:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.686 07:04:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
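The pattern repeating above is the core of the stress test: spdk_nvme_perf (started with -q 128 -w randread -t 30 against cnode1, pid 67100) keeps the subsystem under load while the script hot-removes and re-adds the Delay0 namespace and keeps growing the NULL1 bdev behind the other namespace. Condensed, the traced loop from ns_hotplug_stress.sh is roughly (a sketch; $rpc is shorthand introduced here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID"; do                                     # loop until the perf workload exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 (Delay0) under load
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # and immediately re-attach it
        (( ++null_size ))
        $rpc bdev_null_resize NULL1 "$null_size"                      # grow the null bdev backing the other namespace
    done

The bare "true" lines after each resize are simply the JSON result of the bdev_null_resize call.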
00:11:48.944 07:04:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:48.944 07:04:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:49.202 true 00:11:49.202 07:04:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:49.202 07:04:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.460 07:04:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.718 07:04:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:49.718 07:04:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:49.718 true 00:11:49.718 07:04:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:49.718 07:04:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.652 07:04:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.910 07:04:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:50.910 07:04:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:51.170 true 00:11:51.170 07:04:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:51.170 07:04:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.430 07:04:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.430 07:04:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:51.430 07:04:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:51.689 true 00:11:51.689 07:04:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:51.689 07:04:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.625 07:04:36 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.884 07:04:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:52.884 07:04:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:53.143 true 00:11:53.143 07:04:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:53.143 07:04:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.402 07:04:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.660 07:04:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:53.660 07:04:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:53.919 true 00:11:53.919 07:04:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:53.919 07:04:37 -- target/ns_hotplug_stress.sh@45 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.855 07:04:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.855 07:04:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:54.855 07:04:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:55.114 true 00:11:55.114 07:04:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:55.114 07:04:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.373 07:04:39 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.632 07:04:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:55.632 07:04:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:55.632 true 00:11:55.890 07:04:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:55.890 07:04:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.827 07:04:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.827 07:04:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:56.827 07:04:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:57.085 true 00:11:57.085 07:04:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:57.085 07:04:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.344 07:04:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.603 07:04:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:57.603 07:04:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:57.862 true 00:11:57.862 07:04:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:57.862 07:04:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.799 07:04:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.058 07:04:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:59.058 07:04:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:59.058 true 00:11:59.058 07:04:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:59.058 07:04:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.317 07:04:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.576 07:04:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:59.576 07:04:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1028 00:11:59.833 true 00:11:59.833 07:04:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:11:59.833 07:04:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.767 07:04:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.025 07:04:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:01.025 07:04:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:01.025 Initializing NVMe Controllers 00:12:01.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:01.025 Controller IO queue size 128, less than required. 00:12:01.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:01.025 Controller IO queue size 128, less than required. 00:12:01.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:01.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:01.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:01.025 Initialization complete. Launching workers. 00:12:01.025 ======================================================== 00:12:01.025 Latency(us) 00:12:01.025 Device Information : IOPS MiB/s Average min max 00:12:01.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 461.84 0.23 160742.08 3723.94 1113387.08 00:12:01.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13295.68 6.49 9627.67 3267.82 535254.61 00:12:01.025 ======================================================== 00:12:01.025 Total : 13757.52 6.72 14700.57 3267.82 1113387.08 00:12:01.025 00:12:01.282 true 00:12:01.282 07:04:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67100 00:12:01.282 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67100) - No such process 00:12:01.282 07:04:45 -- target/ns_hotplug_stress.sh@53 -- # wait 67100 00:12:01.282 07:04:45 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.541 07:04:45 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.799 07:04:45 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:01.799 07:04:45 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:01.799 07:04:45 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:01.799 07:04:45 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:01.799 07:04:45 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:02.058 null0 00:12:02.058 07:04:45 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:02.058 07:04:45 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:02.058 07:04:45 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:02.058 null1 00:12:02.058 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:02.058 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:02.058 07:04:46 -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:02.317 null2 00:12:02.317 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:02.317 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:02.317 07:04:46 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:02.576 null3 00:12:02.576 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:02.576 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:02.576 07:04:46 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:02.834 null4 00:12:02.834 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:02.834 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:02.834 07:04:46 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:03.093 null5 00:12:03.093 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:03.093 07:04:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:03.093 07:04:46 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:03.351 null6 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:03.351 null7 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.351 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
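With the perf workload finished and namespaces 1 and 2 removed, the test moves to its second phase: eight null bdevs (null0 through null7) and eight background workers, each repeatedly attaching and detaching its own namespace ID on cnode1 while the others do the same. The add_remove activity being launched here condenses to roughly the following (a sketch; $rpc is shorthand introduced here, and the 10-iteration count mirrors the (( i < 10 )) guard in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {                                   # one worker: churn a single namespace ID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    for ((i = 0; i < 8; i++)); do
        $rpc bdev_null_create "null$i" 100 4096      # eight 100 MiB null bdevs with 4096-byte blocks
    done
    pids=()
    for ((i = 0; i < 8; i++)); do
        add_remove "$((i + 1))" "null$i" &           # worker i churns namespace ID i+1
        pids+=($!)
    done
    wait "${pids[@]}"                                # the "wait 68128 68130 ..." further below is this synchronization point

Because the workers run concurrently, their add and remove lines interleave in arbitrary order from this point on.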
00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:03.609 07:04:47 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@66 -- # wait 68128 68130 68132 68134 68136 68137 68138 68142 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:03.610 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:03.868 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.131 07:04:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:04.131 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.389 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
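The eight workers run concurrently in the background; the "wait 68128 68130 ..." entry above is the parent shell waiting on the PIDs it collected when launching them. A sketch of that launch/collect/wait pattern, assuming worker i is paired with nsid i+1 and bdev null<i> as the trace shows:

    # Sketch of the worker launch and the wait seen above (assumed reconstruction)
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 paired with null0..null7
        pids+=($!)                         # remember each worker's PID
    done
    wait "${pids[@]}"                      # block until every worker finishes its 10 iterations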
00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.646 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.647 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:04.647 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:04.647 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:04.647 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:04.904 07:04:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:04.904 07:04:48 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:05.161 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.420 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:05.678 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
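Because the add and remove calls from different workers interleave freely, as the trace above shows, the namespace list of cnode1 fluctuates throughout the run. A hypothetical way to snapshot it between iterations (not something this test issues) is the nvmf_get_subsystems RPC:

    # Hypothetical inspection step, not part of this run: dump the subsystems, including
    # whatever namespaces happen to be attached to nqn.2016-06.io.spdk:cnode1 at that instant.
    "$rpc" nvmf_get_subsystems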
00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:05.954 07:04:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:06.239 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:06.240 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:06.240 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.240 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:06.240 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:06.502 07:04:50 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.502 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:06.760 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.018 07:04:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:07.018 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.018 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.018 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:07.018 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.018 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.018 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:07.277 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.537 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:12:07.796 07:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:08.055 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.315 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:08.573 07:04:52 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:08.573 07:04:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:08.573 07:04:52 -- nvmf/common.sh@116 -- # sync 00:12:08.573 07:04:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:08.573 07:04:52 -- nvmf/common.sh@119 -- # set +e 00:12:08.573 07:04:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:08.573 07:04:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:08.573 rmmod nvme_tcp 00:12:08.573 rmmod nvme_fabrics 00:12:08.573 rmmod nvme_keyring 00:12:08.832 07:04:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:08.832 07:04:52 -- nvmf/common.sh@123 -- # set -e 00:12:08.832 07:04:52 -- nvmf/common.sh@124 -- # return 0 00:12:08.832 07:04:52 -- nvmf/common.sh@477 -- # '[' -n 66963 ']' 00:12:08.832 07:04:52 -- nvmf/common.sh@478 -- # killprocess 66963 00:12:08.832 07:04:52 -- common/autotest_common.sh@926 -- # '[' -z 66963 ']' 00:12:08.832 07:04:52 -- common/autotest_common.sh@930 -- # kill -0 66963 00:12:08.832 07:04:52 -- common/autotest_common.sh@931 -- # uname 00:12:08.832 07:04:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:08.832 07:04:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66963 00:12:08.832 07:04:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:08.832 killing process with pid 66963 00:12:08.832 07:04:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:08.832 07:04:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66963' 00:12:08.832 07:04:52 -- common/autotest_common.sh@945 -- # kill 66963 00:12:08.832 07:04:52 -- common/autotest_common.sh@950 -- # wait 66963 00:12:09.091 07:04:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:09.091 07:04:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:09.091 07:04:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:09.091 07:04:52 -- 
nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.091 07:04:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:09.091 07:04:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.091 07:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.091 07:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.091 07:04:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:09.091 00:12:09.091 real 0m41.935s 00:12:09.091 user 3m17.846s 00:12:09.091 sys 0m11.821s 00:12:09.091 07:04:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.091 07:04:53 -- common/autotest_common.sh@10 -- # set +x 00:12:09.091 ************************************ 00:12:09.091 END TEST nvmf_ns_hotplug_stress 00:12:09.091 ************************************ 00:12:09.091 07:04:53 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:09.091 07:04:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:09.091 07:04:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.091 07:04:53 -- common/autotest_common.sh@10 -- # set +x 00:12:09.091 ************************************ 00:12:09.091 START TEST nvmf_connect_stress 00:12:09.091 ************************************ 00:12:09.091 07:04:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:09.349 * Looking for test storage... 00:12:09.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:09.349 07:04:53 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:09.349 07:04:53 -- nvmf/common.sh@7 -- # uname -s 00:12:09.349 07:04:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.349 07:04:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.349 07:04:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.349 07:04:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.349 07:04:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.349 07:04:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.349 07:04:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.349 07:04:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.349 07:04:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.349 07:04:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.349 07:04:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:09.350 07:04:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:09.350 07:04:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.350 07:04:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.350 07:04:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:09.350 07:04:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.350 07:04:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.350 07:04:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.350 07:04:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.350 07:04:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.350 07:04:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.350 07:04:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.350 07:04:53 -- paths/export.sh@5 -- # export PATH 00:12:09.350 07:04:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.350 07:04:53 -- nvmf/common.sh@46 -- # : 0 00:12:09.350 07:04:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:09.350 07:04:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:09.350 07:04:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:09.350 07:04:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.350 07:04:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.350 07:04:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:09.350 07:04:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:09.350 07:04:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:09.350 07:04:53 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:09.350 07:04:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:09.350 07:04:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.350 07:04:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:09.350 07:04:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:09.350 07:04:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:09.350 07:04:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.350 07:04:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.350 07:04:53 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.350 07:04:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:09.350 07:04:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:09.350 07:04:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:09.350 07:04:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:09.350 07:04:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:09.350 07:04:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:09.350 07:04:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.350 07:04:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.350 07:04:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:09.350 07:04:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:09.350 07:04:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:09.350 07:04:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:09.350 07:04:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:09.350 07:04:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.350 07:04:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:09.350 07:04:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:09.350 07:04:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:09.350 07:04:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:09.350 07:04:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:09.350 07:04:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:09.350 Cannot find device "nvmf_tgt_br" 00:12:09.350 07:04:53 -- nvmf/common.sh@154 -- # true 00:12:09.350 07:04:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.350 Cannot find device "nvmf_tgt_br2" 00:12:09.350 07:04:53 -- nvmf/common.sh@155 -- # true 00:12:09.350 07:04:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:09.350 07:04:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:09.350 Cannot find device "nvmf_tgt_br" 00:12:09.350 07:04:53 -- nvmf/common.sh@157 -- # true 00:12:09.350 07:04:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:09.350 Cannot find device "nvmf_tgt_br2" 00:12:09.350 07:04:53 -- nvmf/common.sh@158 -- # true 00:12:09.350 07:04:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:09.350 07:04:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:09.350 07:04:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.350 07:04:53 -- nvmf/common.sh@161 -- # true 00:12:09.350 07:04:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.350 07:04:53 -- nvmf/common.sh@162 -- # true 00:12:09.350 07:04:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.350 07:04:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.350 07:04:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.350 07:04:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.350 07:04:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.350 07:04:53 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.350 07:04:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.350 07:04:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:09.350 07:04:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:09.350 07:04:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:09.350 07:04:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:09.350 07:04:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:09.350 07:04:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:09.608 07:04:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:09.608 07:04:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:09.608 07:04:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:09.608 07:04:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:09.608 07:04:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:09.608 07:04:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:09.608 07:04:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:09.608 07:04:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:09.608 07:04:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.608 07:04:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.608 07:04:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:09.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:12:09.608 00:12:09.608 --- 10.0.0.2 ping statistics --- 00:12:09.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.608 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:09.608 07:04:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:09.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:09.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:09.608 00:12:09.608 --- 10.0.0.3 ping statistics --- 00:12:09.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.608 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:09.608 07:04:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:12:09.608 00:12:09.608 --- 10.0.0.1 ping statistics --- 00:12:09.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.608 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:09.608 07:04:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.608 07:04:53 -- nvmf/common.sh@421 -- # return 0 00:12:09.608 07:04:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:09.608 07:04:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.608 07:04:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:09.608 07:04:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:09.608 07:04:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.608 07:04:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:09.608 07:04:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:09.609 07:04:53 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:09.609 07:04:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:09.609 07:04:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:09.609 07:04:53 -- common/autotest_common.sh@10 -- # set +x 00:12:09.609 07:04:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:09.609 07:04:53 -- nvmf/common.sh@469 -- # nvmfpid=69435 00:12:09.609 07:04:53 -- nvmf/common.sh@470 -- # waitforlisten 69435 00:12:09.609 07:04:53 -- common/autotest_common.sh@819 -- # '[' -z 69435 ']' 00:12:09.609 07:04:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.609 07:04:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.609 07:04:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.609 07:04:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:09.609 07:04:53 -- common/autotest_common.sh@10 -- # set +x 00:12:09.609 [2024-07-11 07:04:53.574339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:09.609 [2024-07-11 07:04:53.574420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.866 [2024-07-11 07:04:53.707545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.866 [2024-07-11 07:04:53.789182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:09.866 [2024-07-11 07:04:53.789336] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.866 [2024-07-11 07:04:53.789349] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.866 [2024-07-11 07:04:53.789358] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
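The nvmf_veth_init sequence above builds the virtual test topology before the target starts: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a host-side initiator interface, and a bridge tying them together, after which the target is launched inside the namespace. A condensed sketch using only commands that appear in the log (link-up steps and the second target interface omitted):

    # Condensed from the nvmf_veth_init steps logged above (not the full helper)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator side (host)
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in

    # The target is then started inside the namespace (PID 69435 in this run):
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE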
00:12:09.866 [2024-07-11 07:04:53.789526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.866 [2024-07-11 07:04:53.790103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.866 [2024-07-11 07:04:53.790141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.432 07:04:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:10.432 07:04:54 -- common/autotest_common.sh@852 -- # return 0 00:12:10.432 07:04:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:10.432 07:04:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:10.432 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:10.690 07:04:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.690 07:04:54 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.690 07:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.690 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:10.690 [2024-07-11 07:04:54.535654] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.690 07:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.690 07:04:54 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.690 07:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.690 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:10.690 07:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.690 07:04:54 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.690 07:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.690 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:10.690 [2024-07-11 07:04:54.553352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.690 07:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.690 07:04:54 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:10.690 07:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.690 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:10.690 NULL1 00:12:10.690 07:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.690 07:04:54 -- target/connect_stress.sh@21 -- # PERF_PID=69488 00:12:10.690 07:04:54 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:10.690 07:04:54 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:10.690 07:04:54 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- 
target/connect_stress.sh@28 -- # cat 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.690 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.690 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:10.691 07:04:54 -- target/connect_stress.sh@28 -- # cat 00:12:10.691 07:04:54 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:10.691 07:04:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.691 07:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.691 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:10.948 07:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.948 07:04:54 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:10.948 07:04:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.948 07:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.948 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:11.515 07:04:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.515 07:04:55 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:11.515 07:04:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.515 07:04:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.515 07:04:55 -- common/autotest_common.sh@10 -- # set +x 00:12:11.773 07:04:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.774 07:04:55 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:11.774 07:04:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.774 07:04:55 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:11.774 07:04:55 -- common/autotest_common.sh@10 -- # set +x 00:12:12.032 07:04:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:12.032 07:04:55 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:12.032 07:04:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.032 07:04:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:12.032 07:04:55 -- common/autotest_common.sh@10 -- # set +x 00:12:12.292 07:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:12.292 07:04:56 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:12.292 07:04:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.292 07:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:12.292 07:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:12.551 07:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:12.551 07:04:56 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:12.551 07:04:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.551 07:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:12.551 07:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:13.119 07:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.119 07:04:56 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:13.119 07:04:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.119 07:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.119 07:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:13.377 07:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.378 07:04:57 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:13.378 07:04:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.378 07:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.378 07:04:57 -- common/autotest_common.sh@10 -- # set +x 00:12:13.636 07:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.636 07:04:57 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:13.636 07:04:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.636 07:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.636 07:04:57 -- common/autotest_common.sh@10 -- # set +x 00:12:13.894 07:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.894 07:04:57 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:13.894 07:04:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.894 07:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.895 07:04:57 -- common/autotest_common.sh@10 -- # set +x 00:12:14.153 07:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.153 07:04:58 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:14.153 07:04:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.153 07:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.153 07:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:14.720 07:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.720 07:04:58 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:14.720 07:04:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.720 07:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.720 07:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:14.979 07:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.979 07:04:58 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:14.979 07:04:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.979 07:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.979 
07:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:15.237 07:04:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.237 07:04:59 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:15.237 07:04:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.237 07:04:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.237 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:12:15.495 07:04:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.495 07:04:59 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:15.495 07:04:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.495 07:04:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.495 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:12:15.754 07:04:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:15.754 07:04:59 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:15.754 07:04:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.754 07:04:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.754 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.320 07:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.320 07:05:00 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:16.320 07:05:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.320 07:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.320 07:05:00 -- common/autotest_common.sh@10 -- # set +x 00:12:16.579 07:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.579 07:05:00 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:16.579 07:05:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.579 07:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.579 07:05:00 -- common/autotest_common.sh@10 -- # set +x 00:12:16.838 07:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.838 07:05:00 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:16.838 07:05:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.838 07:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.838 07:05:00 -- common/autotest_common.sh@10 -- # set +x 00:12:17.096 07:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.096 07:05:01 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:17.096 07:05:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.096 07:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.096 07:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:17.665 07:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.665 07:05:01 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:17.665 07:05:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.665 07:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.665 07:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:17.924 07:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.924 07:05:01 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:17.924 07:05:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.924 07:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.924 07:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:18.182 07:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.182 07:05:02 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:18.182 07:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.182 07:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.182 07:05:02 -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.441 07:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.441 07:05:02 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:18.441 07:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.441 07:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.441 07:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:18.700 07:05:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.700 07:05:02 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:18.700 07:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.700 07:05:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.700 07:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:19.268 07:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.268 07:05:03 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:19.269 07:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.269 07:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.269 07:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:19.528 07:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.528 07:05:03 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:19.528 07:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.528 07:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.528 07:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:19.786 07:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.786 07:05:03 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:19.786 07:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.786 07:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.786 07:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:20.044 07:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.044 07:05:03 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:20.044 07:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.044 07:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.044 07:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:20.303 07:05:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.303 07:05:04 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:20.303 07:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.303 07:05:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.303 07:05:04 -- common/autotest_common.sh@10 -- # set +x 00:12:20.870 07:05:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.870 07:05:04 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:20.870 07:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.870 07:05:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.870 07:05:04 -- common/autotest_common.sh@10 -- # set +x 00:12:20.870 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:21.128 07:05:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.128 07:05:04 -- target/connect_stress.sh@34 -- # kill -0 69488 00:12:21.128 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69488) - No such process 00:12:21.128 07:05:04 -- target/connect_stress.sh@38 -- # wait 69488 00:12:21.128 07:05:04 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:21.128 07:05:04 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:21.128 07:05:04 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:12:21.128 07:05:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:21.128 07:05:04 -- nvmf/common.sh@116 -- # sync 00:12:21.128 07:05:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:21.129 07:05:04 -- nvmf/common.sh@119 -- # set +e 00:12:21.129 07:05:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:21.129 07:05:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:21.129 rmmod nvme_tcp 00:12:21.129 rmmod nvme_fabrics 00:12:21.129 rmmod nvme_keyring 00:12:21.129 07:05:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:21.129 07:05:05 -- nvmf/common.sh@123 -- # set -e 00:12:21.129 07:05:05 -- nvmf/common.sh@124 -- # return 0 00:12:21.129 07:05:05 -- nvmf/common.sh@477 -- # '[' -n 69435 ']' 00:12:21.129 07:05:05 -- nvmf/common.sh@478 -- # killprocess 69435 00:12:21.129 07:05:05 -- common/autotest_common.sh@926 -- # '[' -z 69435 ']' 00:12:21.129 07:05:05 -- common/autotest_common.sh@930 -- # kill -0 69435 00:12:21.129 07:05:05 -- common/autotest_common.sh@931 -- # uname 00:12:21.129 07:05:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:21.129 07:05:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69435 00:12:21.129 07:05:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:21.129 07:05:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:21.129 killing process with pid 69435 00:12:21.129 07:05:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69435' 00:12:21.129 07:05:05 -- common/autotest_common.sh@945 -- # kill 69435 00:12:21.129 07:05:05 -- common/autotest_common.sh@950 -- # wait 69435 00:12:21.388 07:05:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:21.388 07:05:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:21.388 07:05:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:21.388 07:05:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.388 07:05:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:21.388 07:05:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.388 07:05:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.388 07:05:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.388 07:05:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:21.388 00:12:21.388 real 0m12.316s 00:12:21.388 user 0m41.556s 00:12:21.388 sys 0m2.861s 00:12:21.388 07:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.388 07:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:21.388 ************************************ 00:12:21.388 END TEST nvmf_connect_stress 00:12:21.388 ************************************ 00:12:21.388 07:05:05 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:21.388 07:05:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:21.388 07:05:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.388 07:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:21.647 ************************************ 00:12:21.647 START TEST nvmf_fused_ordering 00:12:21.647 ************************************ 00:12:21.647 07:05:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:21.647 * Looking for test storage... 
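The fused_ordering test starting here brings the NVMe-oF TCP target up with the same RPC sequence the connect_stress test used above. Condensed into a standalone sketch (not output captured by this run; it assumes the SPDK repo's scripts/rpc.py against the default RPC socket, with flags copied from the rpc_cmd lines in this log):

    # transport, with the options the test passes (-o -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # subsystem: allow any host (-a), fixed serial, at most 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # TCP listener on the in-namespace target address
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 1000 MB null bdev with 512-byte blocks, attached as the subsystem's namespace
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # the test binary then connects over the same address and subnqn
    test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'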
00:12:21.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.647 07:05:05 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:21.647 07:05:05 -- nvmf/common.sh@7 -- # uname -s 00:12:21.647 07:05:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.647 07:05:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.647 07:05:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.647 07:05:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.647 07:05:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.647 07:05:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.647 07:05:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.647 07:05:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.647 07:05:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.647 07:05:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.647 07:05:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:21.647 07:05:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:21.647 07:05:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.647 07:05:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.647 07:05:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:21.647 07:05:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.647 07:05:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.647 07:05:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.647 07:05:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.647 07:05:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.647 07:05:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.647 07:05:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.647 07:05:05 -- 
paths/export.sh@5 -- # export PATH 00:12:21.647 07:05:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.647 07:05:05 -- nvmf/common.sh@46 -- # : 0 00:12:21.647 07:05:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:21.647 07:05:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:21.647 07:05:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:21.647 07:05:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.647 07:05:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.647 07:05:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:21.647 07:05:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:21.647 07:05:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:21.647 07:05:05 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:21.647 07:05:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:21.647 07:05:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.647 07:05:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:21.647 07:05:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:21.647 07:05:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:21.647 07:05:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.647 07:05:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.647 07:05:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.647 07:05:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:21.647 07:05:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:21.647 07:05:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:21.647 07:05:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:21.647 07:05:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:21.647 07:05:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:21.647 07:05:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.647 07:05:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.647 07:05:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:21.647 07:05:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:21.647 07:05:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:21.647 07:05:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:21.647 07:05:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:21.647 07:05:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.647 07:05:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:21.647 07:05:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:21.647 07:05:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:21.647 07:05:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:21.647 07:05:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:21.647 07:05:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:21.647 Cannot find device "nvmf_tgt_br" 00:12:21.647 
07:05:05 -- nvmf/common.sh@154 -- # true 00:12:21.648 07:05:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:21.648 Cannot find device "nvmf_tgt_br2" 00:12:21.648 07:05:05 -- nvmf/common.sh@155 -- # true 00:12:21.648 07:05:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:21.648 07:05:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:21.648 Cannot find device "nvmf_tgt_br" 00:12:21.648 07:05:05 -- nvmf/common.sh@157 -- # true 00:12:21.648 07:05:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:21.648 Cannot find device "nvmf_tgt_br2" 00:12:21.648 07:05:05 -- nvmf/common.sh@158 -- # true 00:12:21.648 07:05:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:21.648 07:05:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:21.648 07:05:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:21.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.648 07:05:05 -- nvmf/common.sh@161 -- # true 00:12:21.648 07:05:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:21.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.648 07:05:05 -- nvmf/common.sh@162 -- # true 00:12:21.648 07:05:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:21.648 07:05:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:21.648 07:05:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:21.907 07:05:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:21.907 07:05:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:21.907 07:05:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:21.907 07:05:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:21.907 07:05:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:21.907 07:05:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:21.907 07:05:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:21.907 07:05:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:21.907 07:05:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:21.907 07:05:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:21.907 07:05:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:21.907 07:05:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:21.907 07:05:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:21.907 07:05:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:21.907 07:05:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:21.907 07:05:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:21.907 07:05:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:21.907 07:05:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:21.907 07:05:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:21.907 07:05:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:21.907 07:05:05 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:21.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:12:21.907 00:12:21.907 --- 10.0.0.2 ping statistics --- 00:12:21.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.907 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:21.907 07:05:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:21.907 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:21.907 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:21.907 00:12:21.907 --- 10.0.0.3 ping statistics --- 00:12:21.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.907 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:21.907 07:05:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:21.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:21.907 00:12:21.907 --- 10.0.0.1 ping statistics --- 00:12:21.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.907 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:21.907 07:05:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.907 07:05:05 -- nvmf/common.sh@421 -- # return 0 00:12:21.907 07:05:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:21.907 07:05:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.907 07:05:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:21.907 07:05:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:21.907 07:05:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.907 07:05:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:21.907 07:05:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:21.907 07:05:05 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:21.907 07:05:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:21.907 07:05:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:21.907 07:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:21.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.907 07:05:05 -- nvmf/common.sh@469 -- # nvmfpid=69817 00:12:21.907 07:05:05 -- nvmf/common.sh@470 -- # waitforlisten 69817 00:12:21.907 07:05:05 -- common/autotest_common.sh@819 -- # '[' -z 69817 ']' 00:12:21.907 07:05:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.907 07:05:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:21.907 07:05:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.907 07:05:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:21.907 07:05:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:21.907 07:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:21.907 [2024-07-11 07:05:05.946758] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
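For readers following the nvmf_veth_init output above, the network the test stands up can be summarized as the sketch below; the commands are condensed from the log lines above, and the interface/namespace names are the ones the common.sh helpers use (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way):

    # target runs inside its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: initiator side stays on the host, target side moves into the namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bridge the host-side peers together and open the NVMe/TCP port
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # plus 'ip link set ... up' on each interface, as in the log above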
00:12:21.907 [2024-07-11 07:05:05.946836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.166 [2024-07-11 07:05:06.079016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.166 [2024-07-11 07:05:06.161864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:22.166 [2024-07-11 07:05:06.162009] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.166 [2024-07-11 07:05:06.162020] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.166 [2024-07-11 07:05:06.162028] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.166 [2024-07-11 07:05:06.162062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.828 07:05:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:22.828 07:05:06 -- common/autotest_common.sh@852 -- # return 0 00:12:22.828 07:05:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:22.828 07:05:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:22.828 07:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:22.828 07:05:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.828 07:05:06 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.828 07:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:22.828 07:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:22.828 [2024-07-11 07:05:06.884716] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.087 07:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.087 07:05:06 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.087 07:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.087 07:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:23.087 07:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.087 07:05:06 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.087 07:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.087 07:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:23.087 [2024-07-11 07:05:06.900840] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.087 07:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.087 07:05:06 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:23.087 07:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.087 07:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:23.087 NULL1 00:12:23.087 07:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.087 07:05:06 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:23.087 07:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.087 07:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:23.087 07:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.087 07:05:06 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
NULL1 00:12:23.087 07:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.087 07:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:23.087 07:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.087 07:05:06 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:23.088 [2024-07-11 07:05:06.951384] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:23.088 [2024-07-11 07:05:06.951439] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69867 ] 00:12:23.346 Attached to nqn.2016-06.io.spdk:cnode1 00:12:23.346 Namespace ID: 1 size: 1GB 00:12:23.346 fused_ordering(0) 00:12:23.346 fused_ordering(1) 00:12:23.346 fused_ordering(2) 00:12:23.346 fused_ordering(3) 00:12:23.346 fused_ordering(4) 00:12:23.346 fused_ordering(5) 00:12:23.346 fused_ordering(6) 00:12:23.346 fused_ordering(7) 00:12:23.346 fused_ordering(8) 00:12:23.346 fused_ordering(9) 00:12:23.346 fused_ordering(10) 00:12:23.346 fused_ordering(11) 00:12:23.346 fused_ordering(12) 00:12:23.346 fused_ordering(13) 00:12:23.346 fused_ordering(14) 00:12:23.346 fused_ordering(15) 00:12:23.346 fused_ordering(16) 00:12:23.346 fused_ordering(17) 00:12:23.346 fused_ordering(18) 00:12:23.346 fused_ordering(19) 00:12:23.346 fused_ordering(20) 00:12:23.346 fused_ordering(21) 00:12:23.346 fused_ordering(22) 00:12:23.346 fused_ordering(23) 00:12:23.346 fused_ordering(24) 00:12:23.346 fused_ordering(25) 00:12:23.346 fused_ordering(26) 00:12:23.346 fused_ordering(27) 00:12:23.346 fused_ordering(28) 00:12:23.346 fused_ordering(29) 00:12:23.346 fused_ordering(30) 00:12:23.346 fused_ordering(31) 00:12:23.346 fused_ordering(32) 00:12:23.346 fused_ordering(33) 00:12:23.346 fused_ordering(34) 00:12:23.346 fused_ordering(35) 00:12:23.346 fused_ordering(36) 00:12:23.346 fused_ordering(37) 00:12:23.346 fused_ordering(38) 00:12:23.346 fused_ordering(39) 00:12:23.346 fused_ordering(40) 00:12:23.346 fused_ordering(41) 00:12:23.346 fused_ordering(42) 00:12:23.346 fused_ordering(43) 00:12:23.346 fused_ordering(44) 00:12:23.346 fused_ordering(45) 00:12:23.346 fused_ordering(46) 00:12:23.346 fused_ordering(47) 00:12:23.346 fused_ordering(48) 00:12:23.346 fused_ordering(49) 00:12:23.346 fused_ordering(50) 00:12:23.346 fused_ordering(51) 00:12:23.346 fused_ordering(52) 00:12:23.346 fused_ordering(53) 00:12:23.346 fused_ordering(54) 00:12:23.346 fused_ordering(55) 00:12:23.346 fused_ordering(56) 00:12:23.346 fused_ordering(57) 00:12:23.346 fused_ordering(58) 00:12:23.346 fused_ordering(59) 00:12:23.346 fused_ordering(60) 00:12:23.346 fused_ordering(61) 00:12:23.346 fused_ordering(62) 00:12:23.346 fused_ordering(63) 00:12:23.346 fused_ordering(64) 00:12:23.346 fused_ordering(65) 00:12:23.346 fused_ordering(66) 00:12:23.346 fused_ordering(67) 00:12:23.346 fused_ordering(68) 00:12:23.346 fused_ordering(69) 00:12:23.346 fused_ordering(70) 00:12:23.346 fused_ordering(71) 00:12:23.346 fused_ordering(72) 00:12:23.346 fused_ordering(73) 00:12:23.346 fused_ordering(74) 00:12:23.346 fused_ordering(75) 00:12:23.346 fused_ordering(76) 00:12:23.346 fused_ordering(77) 00:12:23.346 fused_ordering(78) 00:12:23.346 fused_ordering(79) 00:12:23.346 fused_ordering(80) 00:12:23.346 
fused_ordering(81) 00:12:23.346 fused_ordering(82) 00:12:23.346 fused_ordering(83) 00:12:23.346 fused_ordering(84) 00:12:23.346 fused_ordering(85) 00:12:23.346 fused_ordering(86) 00:12:23.346 fused_ordering(87) 00:12:23.347 fused_ordering(88) 00:12:23.347 fused_ordering(89) 00:12:23.347 fused_ordering(90) 00:12:23.347 fused_ordering(91) 00:12:23.347 fused_ordering(92) 00:12:23.347 fused_ordering(93) 00:12:23.347 fused_ordering(94) 00:12:23.347 fused_ordering(95) 00:12:23.347 fused_ordering(96) 00:12:23.347 fused_ordering(97) 00:12:23.347 fused_ordering(98) 00:12:23.347 fused_ordering(99) 00:12:23.347 fused_ordering(100) 00:12:23.347 fused_ordering(101) 00:12:23.347 fused_ordering(102) 00:12:23.347 fused_ordering(103) 00:12:23.347 fused_ordering(104) 00:12:23.347 fused_ordering(105) 00:12:23.347 fused_ordering(106) 00:12:23.347 fused_ordering(107) 00:12:23.347 fused_ordering(108) 00:12:23.347 fused_ordering(109) 00:12:23.347 fused_ordering(110) 00:12:23.347 fused_ordering(111) 00:12:23.347 fused_ordering(112) 00:12:23.347 fused_ordering(113) 00:12:23.347 fused_ordering(114) 00:12:23.347 fused_ordering(115) 00:12:23.347 fused_ordering(116) 00:12:23.347 fused_ordering(117) 00:12:23.347 fused_ordering(118) 00:12:23.347 fused_ordering(119) 00:12:23.347 fused_ordering(120) 00:12:23.347 fused_ordering(121) 00:12:23.347 fused_ordering(122) 00:12:23.347 fused_ordering(123) 00:12:23.347 fused_ordering(124) 00:12:23.347 fused_ordering(125) 00:12:23.347 fused_ordering(126) 00:12:23.347 fused_ordering(127) 00:12:23.347 fused_ordering(128) 00:12:23.347 fused_ordering(129) 00:12:23.347 fused_ordering(130) 00:12:23.347 fused_ordering(131) 00:12:23.347 fused_ordering(132) 00:12:23.347 fused_ordering(133) 00:12:23.347 fused_ordering(134) 00:12:23.347 fused_ordering(135) 00:12:23.347 fused_ordering(136) 00:12:23.347 fused_ordering(137) 00:12:23.347 fused_ordering(138) 00:12:23.347 fused_ordering(139) 00:12:23.347 fused_ordering(140) 00:12:23.347 fused_ordering(141) 00:12:23.347 fused_ordering(142) 00:12:23.347 fused_ordering(143) 00:12:23.347 fused_ordering(144) 00:12:23.347 fused_ordering(145) 00:12:23.347 fused_ordering(146) 00:12:23.347 fused_ordering(147) 00:12:23.347 fused_ordering(148) 00:12:23.347 fused_ordering(149) 00:12:23.347 fused_ordering(150) 00:12:23.347 fused_ordering(151) 00:12:23.347 fused_ordering(152) 00:12:23.347 fused_ordering(153) 00:12:23.347 fused_ordering(154) 00:12:23.347 fused_ordering(155) 00:12:23.347 fused_ordering(156) 00:12:23.347 fused_ordering(157) 00:12:23.347 fused_ordering(158) 00:12:23.347 fused_ordering(159) 00:12:23.347 fused_ordering(160) 00:12:23.347 fused_ordering(161) 00:12:23.347 fused_ordering(162) 00:12:23.347 fused_ordering(163) 00:12:23.347 fused_ordering(164) 00:12:23.347 fused_ordering(165) 00:12:23.347 fused_ordering(166) 00:12:23.347 fused_ordering(167) 00:12:23.347 fused_ordering(168) 00:12:23.347 fused_ordering(169) 00:12:23.347 fused_ordering(170) 00:12:23.347 fused_ordering(171) 00:12:23.347 fused_ordering(172) 00:12:23.347 fused_ordering(173) 00:12:23.347 fused_ordering(174) 00:12:23.347 fused_ordering(175) 00:12:23.347 fused_ordering(176) 00:12:23.347 fused_ordering(177) 00:12:23.347 fused_ordering(178) 00:12:23.347 fused_ordering(179) 00:12:23.347 fused_ordering(180) 00:12:23.347 fused_ordering(181) 00:12:23.347 fused_ordering(182) 00:12:23.347 fused_ordering(183) 00:12:23.347 fused_ordering(184) 00:12:23.347 fused_ordering(185) 00:12:23.347 fused_ordering(186) 00:12:23.347 fused_ordering(187) 00:12:23.347 fused_ordering(188) 00:12:23.347 
fused_ordering(189) 00:12:23.347 fused_ordering(190) 00:12:23.347 fused_ordering(191) 00:12:23.347 fused_ordering(192) 00:12:23.347 fused_ordering(193) 00:12:23.347 fused_ordering(194) 00:12:23.347 fused_ordering(195) 00:12:23.347 fused_ordering(196) 00:12:23.347 fused_ordering(197) 00:12:23.347 fused_ordering(198) 00:12:23.347 fused_ordering(199) 00:12:23.347 fused_ordering(200) 00:12:23.347 fused_ordering(201) 00:12:23.347 fused_ordering(202) 00:12:23.347 fused_ordering(203) 00:12:23.347 fused_ordering(204) 00:12:23.347 fused_ordering(205) 00:12:23.607 fused_ordering(206) 00:12:23.607 fused_ordering(207) 00:12:23.607 fused_ordering(208) 00:12:23.607 fused_ordering(209) 00:12:23.607 fused_ordering(210) 00:12:23.607 fused_ordering(211) 00:12:23.607 fused_ordering(212) 00:12:23.607 fused_ordering(213) 00:12:23.607 fused_ordering(214) 00:12:23.607 fused_ordering(215) 00:12:23.607 fused_ordering(216) 00:12:23.607 fused_ordering(217) 00:12:23.607 fused_ordering(218) 00:12:23.607 fused_ordering(219) 00:12:23.607 fused_ordering(220) 00:12:23.607 fused_ordering(221) 00:12:23.607 fused_ordering(222) 00:12:23.607 fused_ordering(223) 00:12:23.607 fused_ordering(224) 00:12:23.607 fused_ordering(225) 00:12:23.607 fused_ordering(226) 00:12:23.607 fused_ordering(227) 00:12:23.607 fused_ordering(228) 00:12:23.607 fused_ordering(229) 00:12:23.607 fused_ordering(230) 00:12:23.607 fused_ordering(231) 00:12:23.607 fused_ordering(232) 00:12:23.607 fused_ordering(233) 00:12:23.607 fused_ordering(234) 00:12:23.607 fused_ordering(235) 00:12:23.607 fused_ordering(236) 00:12:23.607 fused_ordering(237) 00:12:23.607 fused_ordering(238) 00:12:23.607 fused_ordering(239) 00:12:23.607 fused_ordering(240) 00:12:23.607 fused_ordering(241) 00:12:23.607 fused_ordering(242) 00:12:23.607 fused_ordering(243) 00:12:23.607 fused_ordering(244) 00:12:23.607 fused_ordering(245) 00:12:23.607 fused_ordering(246) 00:12:23.607 fused_ordering(247) 00:12:23.607 fused_ordering(248) 00:12:23.607 fused_ordering(249) 00:12:23.607 fused_ordering(250) 00:12:23.607 fused_ordering(251) 00:12:23.607 fused_ordering(252) 00:12:23.607 fused_ordering(253) 00:12:23.607 fused_ordering(254) 00:12:23.607 fused_ordering(255) 00:12:23.607 fused_ordering(256) 00:12:23.607 fused_ordering(257) 00:12:23.607 fused_ordering(258) 00:12:23.607 fused_ordering(259) 00:12:23.607 fused_ordering(260) 00:12:23.607 fused_ordering(261) 00:12:23.608 fused_ordering(262) 00:12:23.608 fused_ordering(263) 00:12:23.608 fused_ordering(264) 00:12:23.608 fused_ordering(265) 00:12:23.608 fused_ordering(266) 00:12:23.608 fused_ordering(267) 00:12:23.608 fused_ordering(268) 00:12:23.608 fused_ordering(269) 00:12:23.608 fused_ordering(270) 00:12:23.608 fused_ordering(271) 00:12:23.608 fused_ordering(272) 00:12:23.608 fused_ordering(273) 00:12:23.608 fused_ordering(274) 00:12:23.608 fused_ordering(275) 00:12:23.608 fused_ordering(276) 00:12:23.608 fused_ordering(277) 00:12:23.608 fused_ordering(278) 00:12:23.608 fused_ordering(279) 00:12:23.608 fused_ordering(280) 00:12:23.608 fused_ordering(281) 00:12:23.608 fused_ordering(282) 00:12:23.608 fused_ordering(283) 00:12:23.608 fused_ordering(284) 00:12:23.608 fused_ordering(285) 00:12:23.608 fused_ordering(286) 00:12:23.608 fused_ordering(287) 00:12:23.608 fused_ordering(288) 00:12:23.608 fused_ordering(289) 00:12:23.608 fused_ordering(290) 00:12:23.608 fused_ordering(291) 00:12:23.608 fused_ordering(292) 00:12:23.608 fused_ordering(293) 00:12:23.608 fused_ordering(294) 00:12:23.608 fused_ordering(295) 00:12:23.608 fused_ordering(296) 
00:12:23.608 fused_ordering(297) 00:12:23.608 fused_ordering(298) 00:12:23.608 fused_ordering(299) 00:12:23.608 fused_ordering(300) 00:12:23.608 fused_ordering(301) 00:12:23.608 fused_ordering(302) 00:12:23.608 fused_ordering(303) 00:12:23.608 fused_ordering(304) 00:12:23.608 fused_ordering(305) 00:12:23.608 fused_ordering(306) 00:12:23.608 fused_ordering(307) 00:12:23.608 fused_ordering(308) 00:12:23.608 fused_ordering(309) 00:12:23.608 fused_ordering(310) 00:12:23.608 fused_ordering(311) 00:12:23.608 fused_ordering(312) 00:12:23.608 fused_ordering(313) 00:12:23.608 fused_ordering(314) 00:12:23.608 fused_ordering(315) 00:12:23.608 fused_ordering(316) 00:12:23.608 fused_ordering(317) 00:12:23.608 fused_ordering(318) 00:12:23.608 fused_ordering(319) 00:12:23.608 fused_ordering(320) 00:12:23.608 fused_ordering(321) 00:12:23.608 fused_ordering(322) 00:12:23.608 fused_ordering(323) 00:12:23.608 fused_ordering(324) 00:12:23.608 fused_ordering(325) 00:12:23.608 fused_ordering(326) 00:12:23.608 fused_ordering(327) 00:12:23.608 fused_ordering(328) 00:12:23.608 fused_ordering(329) 00:12:23.608 fused_ordering(330) 00:12:23.608 fused_ordering(331) 00:12:23.608 fused_ordering(332) 00:12:23.608 fused_ordering(333) 00:12:23.608 fused_ordering(334) 00:12:23.608 fused_ordering(335) 00:12:23.608 fused_ordering(336) 00:12:23.608 fused_ordering(337) 00:12:23.608 fused_ordering(338) 00:12:23.608 fused_ordering(339) 00:12:23.608 fused_ordering(340) 00:12:23.608 fused_ordering(341) 00:12:23.608 fused_ordering(342) 00:12:23.608 fused_ordering(343) 00:12:23.608 fused_ordering(344) 00:12:23.608 fused_ordering(345) 00:12:23.608 fused_ordering(346) 00:12:23.608 fused_ordering(347) 00:12:23.608 fused_ordering(348) 00:12:23.608 fused_ordering(349) 00:12:23.608 fused_ordering(350) 00:12:23.608 fused_ordering(351) 00:12:23.608 fused_ordering(352) 00:12:23.608 fused_ordering(353) 00:12:23.608 fused_ordering(354) 00:12:23.608 fused_ordering(355) 00:12:23.608 fused_ordering(356) 00:12:23.608 fused_ordering(357) 00:12:23.608 fused_ordering(358) 00:12:23.608 fused_ordering(359) 00:12:23.608 fused_ordering(360) 00:12:23.608 fused_ordering(361) 00:12:23.608 fused_ordering(362) 00:12:23.608 fused_ordering(363) 00:12:23.608 fused_ordering(364) 00:12:23.608 fused_ordering(365) 00:12:23.608 fused_ordering(366) 00:12:23.608 fused_ordering(367) 00:12:23.608 fused_ordering(368) 00:12:23.608 fused_ordering(369) 00:12:23.608 fused_ordering(370) 00:12:23.608 fused_ordering(371) 00:12:23.608 fused_ordering(372) 00:12:23.608 fused_ordering(373) 00:12:23.608 fused_ordering(374) 00:12:23.608 fused_ordering(375) 00:12:23.608 fused_ordering(376) 00:12:23.608 fused_ordering(377) 00:12:23.608 fused_ordering(378) 00:12:23.608 fused_ordering(379) 00:12:23.608 fused_ordering(380) 00:12:23.608 fused_ordering(381) 00:12:23.608 fused_ordering(382) 00:12:23.608 fused_ordering(383) 00:12:23.608 fused_ordering(384) 00:12:23.608 fused_ordering(385) 00:12:23.608 fused_ordering(386) 00:12:23.608 fused_ordering(387) 00:12:23.608 fused_ordering(388) 00:12:23.608 fused_ordering(389) 00:12:23.608 fused_ordering(390) 00:12:23.608 fused_ordering(391) 00:12:23.608 fused_ordering(392) 00:12:23.608 fused_ordering(393) 00:12:23.608 fused_ordering(394) 00:12:23.608 fused_ordering(395) 00:12:23.608 fused_ordering(396) 00:12:23.608 fused_ordering(397) 00:12:23.608 fused_ordering(398) 00:12:23.608 fused_ordering(399) 00:12:23.608 fused_ordering(400) 00:12:23.608 fused_ordering(401) 00:12:23.608 fused_ordering(402) 00:12:23.608 fused_ordering(403) 00:12:23.608 
fused_ordering(404) 00:12:23.608 fused_ordering(405) 00:12:23.608 fused_ordering(406) 00:12:23.608 fused_ordering(407) 00:12:23.608 fused_ordering(408) 00:12:23.608 fused_ordering(409) 00:12:23.608 fused_ordering(410) 00:12:23.869 fused_ordering(411) 00:12:23.869 fused_ordering(412) 00:12:23.869 fused_ordering(413) 00:12:23.869 fused_ordering(414) 00:12:23.869 fused_ordering(415) 00:12:23.869 fused_ordering(416) 00:12:23.869 fused_ordering(417) 00:12:23.869 fused_ordering(418) 00:12:23.869 fused_ordering(419) 00:12:23.869 fused_ordering(420) 00:12:23.869 fused_ordering(421) 00:12:23.869 fused_ordering(422) 00:12:23.869 fused_ordering(423) 00:12:23.869 fused_ordering(424) 00:12:23.869 fused_ordering(425) 00:12:23.869 fused_ordering(426) 00:12:23.869 fused_ordering(427) 00:12:23.869 fused_ordering(428) 00:12:23.869 fused_ordering(429) 00:12:23.869 fused_ordering(430) 00:12:23.869 fused_ordering(431) 00:12:23.869 fused_ordering(432) 00:12:23.869 fused_ordering(433) 00:12:23.869 fused_ordering(434) 00:12:23.869 fused_ordering(435) 00:12:23.869 fused_ordering(436) 00:12:23.869 fused_ordering(437) 00:12:23.869 fused_ordering(438) 00:12:23.869 fused_ordering(439) 00:12:23.869 fused_ordering(440) 00:12:23.869 fused_ordering(441) 00:12:23.869 fused_ordering(442) 00:12:23.869 fused_ordering(443) 00:12:23.869 fused_ordering(444) 00:12:23.869 fused_ordering(445) 00:12:23.869 fused_ordering(446) 00:12:23.869 fused_ordering(447) 00:12:23.869 fused_ordering(448) 00:12:23.869 fused_ordering(449) 00:12:23.869 fused_ordering(450) 00:12:23.869 fused_ordering(451) 00:12:23.869 fused_ordering(452) 00:12:23.869 fused_ordering(453) 00:12:23.869 fused_ordering(454) 00:12:23.869 fused_ordering(455) 00:12:23.869 fused_ordering(456) 00:12:23.869 fused_ordering(457) 00:12:23.869 fused_ordering(458) 00:12:23.869 fused_ordering(459) 00:12:23.869 fused_ordering(460) 00:12:23.869 fused_ordering(461) 00:12:23.869 fused_ordering(462) 00:12:23.869 fused_ordering(463) 00:12:23.869 fused_ordering(464) 00:12:23.869 fused_ordering(465) 00:12:23.869 fused_ordering(466) 00:12:23.869 fused_ordering(467) 00:12:23.869 fused_ordering(468) 00:12:23.869 fused_ordering(469) 00:12:23.869 fused_ordering(470) 00:12:23.869 fused_ordering(471) 00:12:23.869 fused_ordering(472) 00:12:23.869 fused_ordering(473) 00:12:23.869 fused_ordering(474) 00:12:23.869 fused_ordering(475) 00:12:23.869 fused_ordering(476) 00:12:23.869 fused_ordering(477) 00:12:23.869 fused_ordering(478) 00:12:23.869 fused_ordering(479) 00:12:23.869 fused_ordering(480) 00:12:23.869 fused_ordering(481) 00:12:23.869 fused_ordering(482) 00:12:23.869 fused_ordering(483) 00:12:23.869 fused_ordering(484) 00:12:23.869 fused_ordering(485) 00:12:23.869 fused_ordering(486) 00:12:23.869 fused_ordering(487) 00:12:23.869 fused_ordering(488) 00:12:23.869 fused_ordering(489) 00:12:23.869 fused_ordering(490) 00:12:23.869 fused_ordering(491) 00:12:23.869 fused_ordering(492) 00:12:23.869 fused_ordering(493) 00:12:23.869 fused_ordering(494) 00:12:23.869 fused_ordering(495) 00:12:23.869 fused_ordering(496) 00:12:23.869 fused_ordering(497) 00:12:23.869 fused_ordering(498) 00:12:23.869 fused_ordering(499) 00:12:23.869 fused_ordering(500) 00:12:23.869 fused_ordering(501) 00:12:23.869 fused_ordering(502) 00:12:23.869 fused_ordering(503) 00:12:23.869 fused_ordering(504) 00:12:23.869 fused_ordering(505) 00:12:23.869 fused_ordering(506) 00:12:23.869 fused_ordering(507) 00:12:23.869 fused_ordering(508) 00:12:23.869 fused_ordering(509) 00:12:23.869 fused_ordering(510) 00:12:23.869 fused_ordering(511) 
00:12:23.869 fused_ordering(512) ... 00:12:24.698 fused_ordering(941) [430 consecutive fused_ordering entries, logged one per request between 00:12:23.869 and 00:12:24.698, condensed; the numbered sequence continues below through fused_ordering(1023)]
00:12:24.698 fused_ordering(942) 00:12:24.698 fused_ordering(943) 00:12:24.698 fused_ordering(944) 00:12:24.698 fused_ordering(945) 00:12:24.698 fused_ordering(946) 00:12:24.698 fused_ordering(947) 00:12:24.698 fused_ordering(948) 00:12:24.698 fused_ordering(949) 00:12:24.698 fused_ordering(950) 00:12:24.698 fused_ordering(951) 00:12:24.698 fused_ordering(952) 00:12:24.698 fused_ordering(953) 00:12:24.698 fused_ordering(954) 00:12:24.698 fused_ordering(955) 00:12:24.698 fused_ordering(956) 00:12:24.698 fused_ordering(957) 00:12:24.698 fused_ordering(958) 00:12:24.698 fused_ordering(959) 00:12:24.698 fused_ordering(960) 00:12:24.698 fused_ordering(961) 00:12:24.698 fused_ordering(962) 00:12:24.698 fused_ordering(963) 00:12:24.698 fused_ordering(964) 00:12:24.698 fused_ordering(965) 00:12:24.698 fused_ordering(966) 00:12:24.698 fused_ordering(967) 00:12:24.698 fused_ordering(968) 00:12:24.698 fused_ordering(969) 00:12:24.698 fused_ordering(970) 00:12:24.698 fused_ordering(971) 00:12:24.698 fused_ordering(972) 00:12:24.698 fused_ordering(973) 00:12:24.698 fused_ordering(974) 00:12:24.698 fused_ordering(975) 00:12:24.698 fused_ordering(976) 00:12:24.698 fused_ordering(977) 00:12:24.698 fused_ordering(978) 00:12:24.698 fused_ordering(979) 00:12:24.698 fused_ordering(980) 00:12:24.698 fused_ordering(981) 00:12:24.698 fused_ordering(982) 00:12:24.698 fused_ordering(983) 00:12:24.698 fused_ordering(984) 00:12:24.698 fused_ordering(985) 00:12:24.698 fused_ordering(986) 00:12:24.698 fused_ordering(987) 00:12:24.698 fused_ordering(988) 00:12:24.698 fused_ordering(989) 00:12:24.698 fused_ordering(990) 00:12:24.698 fused_ordering(991) 00:12:24.698 fused_ordering(992) 00:12:24.698 fused_ordering(993) 00:12:24.698 fused_ordering(994) 00:12:24.698 fused_ordering(995) 00:12:24.698 fused_ordering(996) 00:12:24.698 fused_ordering(997) 00:12:24.698 fused_ordering(998) 00:12:24.698 fused_ordering(999) 00:12:24.698 fused_ordering(1000) 00:12:24.698 fused_ordering(1001) 00:12:24.698 fused_ordering(1002) 00:12:24.698 fused_ordering(1003) 00:12:24.698 fused_ordering(1004) 00:12:24.698 fused_ordering(1005) 00:12:24.698 fused_ordering(1006) 00:12:24.698 fused_ordering(1007) 00:12:24.698 fused_ordering(1008) 00:12:24.698 fused_ordering(1009) 00:12:24.698 fused_ordering(1010) 00:12:24.698 fused_ordering(1011) 00:12:24.698 fused_ordering(1012) 00:12:24.698 fused_ordering(1013) 00:12:24.698 fused_ordering(1014) 00:12:24.698 fused_ordering(1015) 00:12:24.698 fused_ordering(1016) 00:12:24.698 fused_ordering(1017) 00:12:24.698 fused_ordering(1018) 00:12:24.698 fused_ordering(1019) 00:12:24.698 fused_ordering(1020) 00:12:24.698 fused_ordering(1021) 00:12:24.698 fused_ordering(1022) 00:12:24.698 fused_ordering(1023) 00:12:24.698 07:05:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:24.698 07:05:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:24.698 07:05:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:24.698 07:05:08 -- nvmf/common.sh@116 -- # sync 00:12:24.957 07:05:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:24.957 07:05:08 -- nvmf/common.sh@119 -- # set +e 00:12:24.957 07:05:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:24.957 07:05:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:24.957 rmmod nvme_tcp 00:12:24.957 rmmod nvme_fabrics 00:12:24.957 rmmod nvme_keyring 00:12:24.957 07:05:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:24.957 07:05:08 -- nvmf/common.sh@123 -- # set -e 00:12:24.957 07:05:08 -- nvmf/common.sh@124 -- # return 0 
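What runs from here to the END TEST marker is the generic teardown from test/nvmf/common.sh: nvmftestfini unloads the kernel NVMe-oF initiator modules (the rmmod lines above), then killprocess stops the nvmf_tgt instance that served this test and the initiator-side address is flushed. A condensed sketch of that sequence, with the pid (69817) and interface name taken from the surrounding log; the retry loop and error handling of the real helpers are omitted:

  # unload the kernel initiator stack; removing nvme-tcp also pulls out nvme_fabrics/nvme_keyring, as seen above
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the target application started for this test and wait for it to exit
  kill 69817
  wait 69817 2>/dev/null || true
  # clear the IPv4 address from the initiator veth so the next test starts clean
  ip -4 addr flush nvmf_init_if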
00:12:24.957 07:05:08 -- nvmf/common.sh@477 -- # '[' -n 69817 ']' 00:12:24.957 07:05:08 -- nvmf/common.sh@478 -- # killprocess 69817 00:12:24.957 07:05:08 -- common/autotest_common.sh@926 -- # '[' -z 69817 ']' 00:12:24.957 07:05:08 -- common/autotest_common.sh@930 -- # kill -0 69817 00:12:24.957 07:05:08 -- common/autotest_common.sh@931 -- # uname 00:12:24.957 07:05:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:24.957 07:05:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69817 00:12:24.957 07:05:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:24.957 07:05:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:24.957 killing process with pid 69817 00:12:24.957 07:05:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69817' 00:12:24.957 07:05:08 -- common/autotest_common.sh@945 -- # kill 69817 00:12:24.957 07:05:08 -- common/autotest_common.sh@950 -- # wait 69817 00:12:25.216 07:05:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:25.217 07:05:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:25.217 07:05:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:25.217 07:05:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.217 07:05:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:25.217 07:05:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.217 07:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.217 07:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.217 07:05:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:25.217 00:12:25.217 real 0m3.742s 00:12:25.217 user 0m4.185s 00:12:25.217 sys 0m1.392s 00:12:25.217 07:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.217 ************************************ 00:12:25.217 END TEST nvmf_fused_ordering 00:12:25.217 ************************************ 00:12:25.217 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:25.217 07:05:09 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:25.217 07:05:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:25.217 07:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:25.217 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:25.217 ************************************ 00:12:25.217 START TEST nvmf_delete_subsystem 00:12:25.217 ************************************ 00:12:25.217 07:05:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:25.476 * Looking for test storage... 
00:12:25.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.476 07:05:09 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.476 07:05:09 -- nvmf/common.sh@7 -- # uname -s 00:12:25.476 07:05:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.476 07:05:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.476 07:05:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.476 07:05:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.476 07:05:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.476 07:05:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.476 07:05:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.476 07:05:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.476 07:05:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.476 07:05:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.476 07:05:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:25.476 07:05:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:25.476 07:05:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.476 07:05:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.476 07:05:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.476 07:05:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.476 07:05:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.476 07:05:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.476 07:05:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.476 07:05:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.476 07:05:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.476 07:05:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.476 07:05:09 -- 
paths/export.sh@5 -- # export PATH 00:12:25.476 07:05:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.476 07:05:09 -- nvmf/common.sh@46 -- # : 0 00:12:25.476 07:05:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:25.476 07:05:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:25.476 07:05:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:25.476 07:05:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.476 07:05:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.476 07:05:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:25.476 07:05:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:25.476 07:05:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:25.476 07:05:09 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:25.476 07:05:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:25.476 07:05:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.476 07:05:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:25.476 07:05:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:25.476 07:05:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:25.476 07:05:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.476 07:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.476 07:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.476 07:05:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:25.476 07:05:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:25.476 07:05:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:25.476 07:05:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:25.476 07:05:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:25.476 07:05:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:25.476 07:05:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.476 07:05:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.476 07:05:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:25.476 07:05:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:25.476 07:05:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.476 07:05:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.477 07:05:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.477 07:05:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.477 07:05:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.477 07:05:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.477 07:05:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.477 07:05:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.477 07:05:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:25.477 07:05:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:25.477 Cannot find device "nvmf_tgt_br" 00:12:25.477 
07:05:09 -- nvmf/common.sh@154 -- # true 00:12:25.477 07:05:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.477 Cannot find device "nvmf_tgt_br2" 00:12:25.477 07:05:09 -- nvmf/common.sh@155 -- # true 00:12:25.477 07:05:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:25.477 07:05:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:25.477 Cannot find device "nvmf_tgt_br" 00:12:25.477 07:05:09 -- nvmf/common.sh@157 -- # true 00:12:25.477 07:05:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:25.477 Cannot find device "nvmf_tgt_br2" 00:12:25.477 07:05:09 -- nvmf/common.sh@158 -- # true 00:12:25.477 07:05:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:25.477 07:05:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:25.477 07:05:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.477 07:05:09 -- nvmf/common.sh@161 -- # true 00:12:25.477 07:05:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.477 07:05:09 -- nvmf/common.sh@162 -- # true 00:12:25.477 07:05:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.477 07:05:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.477 07:05:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.477 07:05:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.736 07:05:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.736 07:05:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.736 07:05:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.736 07:05:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:25.736 07:05:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:25.736 07:05:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:25.736 07:05:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:25.736 07:05:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:25.736 07:05:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:25.736 07:05:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.736 07:05:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.736 07:05:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.736 07:05:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:25.736 07:05:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:25.736 07:05:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.736 07:05:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.736 07:05:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.736 07:05:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.736 07:05:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.736 07:05:09 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:12:25.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:12:25.736 00:12:25.736 --- 10.0.0.2 ping statistics --- 00:12:25.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.736 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:25.736 07:05:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:25.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:25.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:12:25.736 00:12:25.736 --- 10.0.0.3 ping statistics --- 00:12:25.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.736 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:25.736 07:05:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:12:25.736 00:12:25.736 --- 10.0.0.1 ping statistics --- 00:12:25.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.736 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:12:25.736 07:05:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.736 07:05:09 -- nvmf/common.sh@421 -- # return 0 00:12:25.736 07:05:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:25.736 07:05:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.736 07:05:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:25.736 07:05:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:25.736 07:05:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.736 07:05:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:25.736 07:05:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:25.736 07:05:09 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:25.736 07:05:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:25.736 07:05:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:25.736 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:25.736 07:05:09 -- nvmf/common.sh@469 -- # nvmfpid=70048 00:12:25.736 07:05:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:25.736 07:05:09 -- nvmf/common.sh@470 -- # waitforlisten 70048 00:12:25.736 07:05:09 -- common/autotest_common.sh@819 -- # '[' -z 70048 ']' 00:12:25.736 07:05:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.736 07:05:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:25.736 07:05:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.736 07:05:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:25.736 07:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:25.736 [2024-07-11 07:05:09.792003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
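The ip/iptables calls above are nvmf_veth_init from test/nvmf/common.sh building the self-contained topology these TCP tests run on: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1/24), the target side lives in the nvmf_tgt_ns_spdk namespace on nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and the host ends of the veth pairs are enslaved to the nvmf_br bridge before connectivity is verified with the pings shown. A minimal sketch of the same topology, with names and addresses taken from the log (only one target interface shown, cleanup omitted):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator (kept in the root namespace), one for the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # admit NVMe/TCP traffic on the initiator veth and check reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2

The target is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3, as on the nvmfappstart line above), so it listens on 10.0.0.2:4420 while the perf initiator connects from the root namespace.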
00:12:25.736 [2024-07-11 07:05:09.792058] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.995 [2024-07-11 07:05:09.927299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:25.995 [2024-07-11 07:05:10.035987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:25.995 [2024-07-11 07:05:10.036162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.995 [2024-07-11 07:05:10.036178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.995 [2024-07-11 07:05:10.036191] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.995 [2024-07-11 07:05:10.036337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.995 [2024-07-11 07:05:10.036357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.933 07:05:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:26.933 07:05:10 -- common/autotest_common.sh@852 -- # return 0 00:12:26.933 07:05:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:26.933 07:05:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:26.933 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 07:05:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.933 07:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.933 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 [2024-07-11 07:05:10.809881] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.933 07:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:26.933 07:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.933 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 07:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.933 07:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.933 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 [2024-07-11 07:05:10.825951] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.933 07:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:26.933 07:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.933 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 NULL1 00:12:26.933 07:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:26.933 07:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.933 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 
Delay0 00:12:26.933 07:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.933 07:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.933 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:26.933 07:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@28 -- # perf_pid=70105 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:26.933 07:05:10 -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:27.190 [2024-07-11 07:05:11.030766] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:29.092 07:05:12 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.092 07:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.092 07:05:12 -- common/autotest_common.sh@10 -- # set +x 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 
[2024-07-11 07:05:13.071477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1305af0 is same with the state(5) to be set 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Write completed with error (sct=0, sc=8) 
00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 starting I/O failed: -6 00:12:29.092 Write completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.092 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 starting I/O failed: -6 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 starting I/O failed: -6 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 starting I/O failed: -6 00:12:29.093 [2024-07-11 07:05:13.072997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ee4000c00 is same with the state(5) to be set 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed 
with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Write completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:29.093 Read completed with error (sct=0, sc=8) 00:12:30.029 [2024-07-11 07:05:14.048598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1323f80 is same with the state(5) to be set 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 [2024-07-11 07:05:14.069017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ee400c480 is same with the state(5) to be set 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error 
(sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Write completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.029 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 [2024-07-11 07:05:14.069240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1ee400bf20 is same with the state(5) to be set 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Write completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Write completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Write completed with error (sct=0, sc=8) 00:12:30.030 Write completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 [2024-07-11 07:05:14.073966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1305840 is same with the state(5) to be set 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Write completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Write completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Read completed with error (sct=0, sc=8) 00:12:30.030 Write completed with error (sct=0, sc=8) 00:12:30.030 [2024-07-11 07:05:14.074395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1324c10 is same with the state(5) to be set 00:12:30.030 [2024-07-11 07:05:14.075239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1323f80 (9): Bad file descriptor 00:12:30.030 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:30.030 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.030 07:05:14 -- target/delete_subsystem.sh@34 -- # delay=0 00:12:30.030 07:05:14 -- target/delete_subsystem.sh@35 -- # kill -0 70105 00:12:30.030 07:05:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:30.030 Initializing NVMe Controllers 00:12:30.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:30.030 Controller IO queue size 128, less than required. 
00:12:30.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:30.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:30.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:30.030 Initialization complete. Launching workers. 00:12:30.030 ======================================================== 00:12:30.030 Latency(us) 00:12:30.030 Device Information : IOPS MiB/s Average min max 00:12:30.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 146.10 0.07 957655.95 784.05 1015936.27 00:12:30.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.42 0.08 941130.49 363.67 1997225.93 00:12:30.030 ======================================================== 00:12:30.030 Total : 311.52 0.15 948880.91 363.67 1997225.93 00:12:30.030 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@35 -- # kill -0 70105 00:12:30.597 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70105) - No such process 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@45 -- # NOT wait 70105 00:12:30.597 07:05:14 -- common/autotest_common.sh@640 -- # local es=0 00:12:30.597 07:05:14 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 70105 00:12:30.597 07:05:14 -- common/autotest_common.sh@628 -- # local arg=wait 00:12:30.597 07:05:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:30.597 07:05:14 -- common/autotest_common.sh@632 -- # type -t wait 00:12:30.597 07:05:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:30.597 07:05:14 -- common/autotest_common.sh@643 -- # wait 70105 00:12:30.597 07:05:14 -- common/autotest_common.sh@643 -- # es=1 00:12:30.597 07:05:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:30.597 07:05:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:30.597 07:05:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:30.597 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.597 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:12:30.597 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.597 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.597 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:12:30.597 [2024-07-11 07:05:14.602359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.597 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.597 07:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.597 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:12:30.597 07:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@54 -- # perf_pid=70146 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@56 -- # delay=0 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:30.597 07:05:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:30.855 [2024-07-11 07:05:14.770750] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:31.113 07:05:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:31.113 07:05:15 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:31.113 07:05:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:31.686 07:05:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:31.686 07:05:15 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:31.686 07:05:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:32.252 07:05:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:32.252 07:05:16 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:32.252 07:05:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:32.816 07:05:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:32.816 07:05:16 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:32.816 07:05:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:33.381 07:05:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:33.381 07:05:17 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:33.381 07:05:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:33.639 07:05:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:33.639 07:05:17 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:33.639 07:05:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:33.897 Initializing NVMe Controllers 00:12:33.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:33.897 Controller IO queue size 128, less than required. 00:12:33.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:33.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:33.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:33.897 Initialization complete. Launching workers. 
00:12:33.897 ======================================================== 00:12:33.897 Latency(us) 00:12:33.897 Device Information : IOPS MiB/s Average min max 00:12:33.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003509.34 1000113.91 1013503.73 00:12:33.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005479.94 1000180.94 1041719.27 00:12:33.897 ======================================================== 00:12:33.897 Total : 256.00 0.12 1004494.64 1000113.91 1041719.27 00:12:33.897 00:12:34.156 07:05:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:34.156 07:05:18 -- target/delete_subsystem.sh@57 -- # kill -0 70146 00:12:34.156 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70146) - No such process 00:12:34.156 07:05:18 -- target/delete_subsystem.sh@67 -- # wait 70146 00:12:34.156 07:05:18 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:34.156 07:05:18 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:34.156 07:05:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:34.156 07:05:18 -- nvmf/common.sh@116 -- # sync 00:12:34.156 07:05:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:34.156 07:05:18 -- nvmf/common.sh@119 -- # set +e 00:12:34.156 07:05:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:34.156 07:05:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:34.156 rmmod nvme_tcp 00:12:34.156 rmmod nvme_fabrics 00:12:34.156 rmmod nvme_keyring 00:12:34.414 07:05:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:34.414 07:05:18 -- nvmf/common.sh@123 -- # set -e 00:12:34.414 07:05:18 -- nvmf/common.sh@124 -- # return 0 00:12:34.414 07:05:18 -- nvmf/common.sh@477 -- # '[' -n 70048 ']' 00:12:34.414 07:05:18 -- nvmf/common.sh@478 -- # killprocess 70048 00:12:34.414 07:05:18 -- common/autotest_common.sh@926 -- # '[' -z 70048 ']' 00:12:34.414 07:05:18 -- common/autotest_common.sh@930 -- # kill -0 70048 00:12:34.414 07:05:18 -- common/autotest_common.sh@931 -- # uname 00:12:34.414 07:05:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:34.414 07:05:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70048 00:12:34.414 killing process with pid 70048 00:12:34.414 07:05:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:34.414 07:05:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:34.414 07:05:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70048' 00:12:34.414 07:05:18 -- common/autotest_common.sh@945 -- # kill 70048 00:12:34.414 07:05:18 -- common/autotest_common.sh@950 -- # wait 70048 00:12:34.673 07:05:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:34.673 07:05:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:34.673 07:05:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:34.673 07:05:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.673 07:05:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:34.673 07:05:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.673 07:05:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.673 07:05:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.673 07:05:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:34.673 00:12:34.673 real 0m9.390s 00:12:34.673 user 0m29.018s 00:12:34.673 sys 0m1.259s 00:12:34.673 07:05:18 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.673 07:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 ************************************ 00:12:34.673 END TEST nvmf_delete_subsystem 00:12:34.673 ************************************ 00:12:34.673 07:05:18 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:12:34.673 07:05:18 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:12:34.673 07:05:18 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:34.673 07:05:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:34.673 07:05:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:34.673 07:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 ************************************ 00:12:34.673 START TEST nvmf_vfio_user 00:12:34.673 ************************************ 00:12:34.673 07:05:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:34.932 * Looking for test storage... 00:12:34.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.932 07:05:18 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.932 07:05:18 -- nvmf/common.sh@7 -- # uname -s 00:12:34.932 07:05:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.932 07:05:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.932 07:05:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.932 07:05:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.932 07:05:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.932 07:05:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.932 07:05:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.932 07:05:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.932 07:05:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.932 07:05:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.932 07:05:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:34.932 07:05:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:12:34.932 07:05:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.932 07:05:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.932 07:05:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.932 07:05:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.932 07:05:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.932 07:05:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.932 07:05:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.932 07:05:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.932 07:05:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.933 07:05:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.933 07:05:18 -- paths/export.sh@5 -- # export PATH 00:12:34.933 07:05:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.933 07:05:18 -- nvmf/common.sh@46 -- # : 0 00:12:34.933 07:05:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.933 07:05:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.933 07:05:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.933 07:05:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.933 07:05:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.933 07:05:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:34.933 07:05:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.933 07:05:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:34.933 Process pid: 70275 00:12:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
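The lines that follow launch the nvmf_tgt application for the vfio-user test and then block in waitforlisten until the target's RPC server is reachable on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming the binary path and arguments shown in the log and an illustrative limit of 100 half-second polls:

rpc_sock=/var/tmp/spdk.sock
# Start the NVMe-oF target on cores 0-3 with all tracepoint groups enabled, as in the log below.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
for _ in $(seq 1 100); do
    # The UNIX socket only appears once the reactors are up and the RPC server is listening.
    [ -S "$rpc_sock" ] && break
    sleep 0.5
done
[ -S "$rpc_sock" ] || { echo "target never listened on $rpc_sock" >&2; kill "$nvmfpid"; exit 1; }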
00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70275 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70275' 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70275 00:12:34.933 07:05:18 -- common/autotest_common.sh@819 -- # '[' -z 70275 ']' 00:12:34.933 07:05:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.933 07:05:18 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:34.933 07:05:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:34.933 07:05:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.933 07:05:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:34.933 07:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:34.933 [2024-07-11 07:05:18.863937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:34.933 [2024-07-11 07:05:18.864020] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.192 [2024-07-11 07:05:19.002918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.192 [2024-07-11 07:05:19.090742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:35.192 [2024-07-11 07:05:19.090901] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.192 [2024-07-11 07:05:19.090915] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.192 [2024-07-11 07:05:19.090924] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
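Because the target was started with -e 0xFFFF, every tracepoint group is enabled, and the notices above name two ways to collect the events. A short sketch restating both options using only the commands the log itself suggests (the destination file names and the output redirection are illustrative assumptions):

# Option 1: dump a snapshot of the live trace ring for shm id 0, as the notice suggests.
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
# Option 2: copy the shared-memory trace file for offline analysis, as the notice recommends.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0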
00:12:35.192 [2024-07-11 07:05:19.091073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.192 [2024-07-11 07:05:19.091419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.192 [2024-07-11 07:05:19.091559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.192 [2024-07-11 07:05:19.091571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.758 07:05:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:35.758 07:05:19 -- common/autotest_common.sh@852 -- # return 0 00:12:35.758 07:05:19 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:36.714 07:05:20 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:36.973 07:05:20 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:36.973 07:05:20 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:36.973 07:05:20 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:36.973 07:05:20 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:36.973 07:05:20 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:37.232 Malloc1 00:12:37.232 07:05:21 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:37.491 07:05:21 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:37.750 07:05:21 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:38.008 07:05:21 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:38.008 07:05:21 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:38.008 07:05:21 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:38.267 Malloc2 00:12:38.267 07:05:22 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:38.526 07:05:22 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:38.783 07:05:22 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:39.042 07:05:22 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:39.042 07:05:22 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:39.042 07:05:22 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:39.042 07:05:22 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:39.042 07:05:22 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:39.043 07:05:22 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:39.043 [2024-07-11 07:05:22.921421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
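The target setup above is driven entirely through rpc.py: a single VFIOUSER transport is created, and for each of the two devices the script makes a socket directory, creates a 64 MB malloc bdev with 512-byte blocks, adds a subsystem and namespace, and attaches a VFIOUSER listener. Condensed into a loop, a sketch of that same sequence (all paths, NQNs and sizes are copied from the log; only the loop form is a restatement):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
for i in 1 2; do
    # Each emulated controller gets its own vfio-user socket directory.
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    # 64 MB backing bdev with a 512-byte block size, matching MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE.
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done

The spdk_nvme_identify run that follows then attaches to /var/run/vfio-user/domain/vfio-user1/1 as an NVMe-oF controller and dumps its capabilities.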
00:12:39.043 [2024-07-11 07:05:22.921511] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70406 ] 00:12:39.043 [2024-07-11 07:05:23.058565] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:39.043 [2024-07-11 07:05:23.067961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:39.043 [2024-07-11 07:05:23.068010] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc5f96d8000 00:12:39.043 [2024-07-11 07:05:23.068955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.069944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.070964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.071961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.072973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.073983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.074989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.075994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:39.043 [2024-07-11 07:05:23.077008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:39.043 [2024-07-11 07:05:23.077032] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc5f96cd000 00:12:39.043 [2024-07-11 07:05:23.081013] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:39.043 [2024-07-11 07:05:23.090730] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:39.043 [2024-07-11 07:05:23.090793] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:39.043 [2024-07-11 07:05:23.096104] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:39.043 [2024-07-11 07:05:23.096177] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:39.043 [2024-07-11 07:05:23.096266] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:39.043 [2024-07-11 
07:05:23.096297] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:39.043 [2024-07-11 07:05:23.096304] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:39.043 [2024-07-11 07:05:23.097101] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:39.043 [2024-07-11 07:05:23.097126] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:39.043 [2024-07-11 07:05:23.097138] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:39.043 [2024-07-11 07:05:23.098106] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:39.043 [2024-07-11 07:05:23.098141] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:39.043 [2024-07-11 07:05:23.098153] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:39.043 [2024-07-11 07:05:23.099114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:39.043 [2024-07-11 07:05:23.099141] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:39.043 [2024-07-11 07:05:23.100124] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:39.043 [2024-07-11 07:05:23.100153] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:39.043 [2024-07-11 07:05:23.100161] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:39.043 [2024-07-11 07:05:23.100180] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:39.043 [2024-07-11 07:05:23.100286] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:39.043 [2024-07-11 07:05:23.100292] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:39.043 [2024-07-11 07:05:23.100297] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:39.043 [2024-07-11 07:05:23.101135] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:39.303 [2024-07-11 07:05:23.102150] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:39.303 [2024-07-11 07:05:23.103146] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:12:39.303 [2024-07-11 07:05:23.105473] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:39.303 [2024-07-11 07:05:23.106165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:39.303 [2024-07-11 07:05:23.106188] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:39.303 [2024-07-11 07:05:23.106196] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:39.303 [2024-07-11 07:05:23.106224] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:39.303 [2024-07-11 07:05:23.106241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:39.303 [2024-07-11 07:05:23.106260] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:39.303 [2024-07-11 07:05:23.106267] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:39.303 [2024-07-11 07:05:23.106317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.303 [2024-07-11 07:05:23.106386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:39.303 [2024-07-11 07:05:23.106399] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:39.303 [2024-07-11 07:05:23.106405] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:39.303 [2024-07-11 07:05:23.106409] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:39.303 [2024-07-11 07:05:23.106414] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:39.303 [2024-07-11 07:05:23.106418] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:39.303 [2024-07-11 07:05:23.106423] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:39.303 [2024-07-11 07:05:23.106427] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:39.303 [2024-07-11 07:05:23.106441] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:39.303 [2024-07-11 07:05:23.106471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:39.303 [2024-07-11 07:05:23.106491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:39.303 [2024-07-11 07:05:23.106509] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.303 [2024-07-11 07:05:23.106519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.303 [2024-07-11 07:05:23.106527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.303 [2024-07-11 07:05:23.106536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.303 [2024-07-11 07:05:23.106541] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106555] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.106580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.106588] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:39.304 [2024-07-11 07:05:23.106593] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106601] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106611] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.106630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.106682] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106701] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:39.304 [2024-07-11 07:05:23.106706] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:39.304 [2024-07-11 07:05:23.106713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.106729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 
07:05:23.106746] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:39.304 [2024-07-11 07:05:23.106758] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106768] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106778] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:39.304 [2024-07-11 07:05:23.106783] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:39.304 [2024-07-11 07:05:23.106790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.106814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.106839] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106859] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:39.304 [2024-07-11 07:05:23.106864] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:39.304 [2024-07-11 07:05:23.106870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.106885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.106895] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106902] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106913] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106920] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106925] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106930] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:39.304 [2024-07-11 07:05:23.106934] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:39.304 [2024-07-11 07:05:23.106939] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:39.304 [2024-07-11 07:05:23.106961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.106974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.106989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.106997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.107009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.107020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.107033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:39.304 [2024-07-11 07:05:23.107040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:39.304 [2024-07-11 07:05:23.107054] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:39.304 [2024-07-11 07:05:23.107060] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:39.304 [2024-07-11 07:05:23.107064] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:39.304 [2024-07-11 07:05:23.107067] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:39.304 [2024-07-11 07:05:23.107074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:39.304 [2024-07-11 07:05:23.107081] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:39.304 [2024-07-11 07:05:23.107086] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:39.304 [2024-07-11 07:05:23.107092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:39.304 ===================================================== 00:12:39.304 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:39.304 ===================================================== 00:12:39.304 Controller Capabilities/Features 00:12:39.304 ================================ 00:12:39.304 Vendor ID: 4e58 00:12:39.304 Subsystem Vendor ID: 4e58 00:12:39.304 Serial Number: SPDK1 00:12:39.304 Model Number: SPDK bdev Controller 00:12:39.304 Firmware Version: 24.01.1 00:12:39.304 Recommended Arb Burst: 6 00:12:39.304 IEEE OUI Identifier: 8d 6b 50 00:12:39.304 Multi-path I/O 00:12:39.304 May have multiple subsystem ports: Yes 00:12:39.304 May have multiple controllers: Yes 00:12:39.304 Associated with SR-IOV VF: No 00:12:39.304 Max Data Transfer Size: 131072 00:12:39.304 Max Number of Namespaces: 32 
00:12:39.304 Max Number of I/O Queues: 127 00:12:39.304 NVMe Specification Version (VS): 1.3 00:12:39.304 NVMe Specification Version (Identify): 1.3 00:12:39.304 Maximum Queue Entries: 256 00:12:39.304 Contiguous Queues Required: Yes 00:12:39.304 Arbitration Mechanisms Supported 00:12:39.304 Weighted Round Robin: Not Supported 00:12:39.304 Vendor Specific: Not Supported 00:12:39.304 Reset Timeout: 15000 ms 00:12:39.304 Doorbell Stride: 4 bytes 00:12:39.304 NVM Subsystem Reset: Not Supported 00:12:39.304 Command Sets Supported 00:12:39.304 NVM Command Set: Supported 00:12:39.304 Boot Partition: Not Supported 00:12:39.304 Memory Page Size Minimum: 4096 bytes 00:12:39.304 Memory Page Size Maximum: 4096 bytes 00:12:39.304 Persistent Memory Region: Not Supported 00:12:39.304 Optional Asynchronous Events Supported 00:12:39.304 Namespace Attribute Notices: Supported 00:12:39.304 Firmware Activation Notices: Not Supported 00:12:39.304 ANA Change Notices: Not Supported 00:12:39.304 PLE Aggregate Log Change Notices: Not Supported 00:12:39.304 LBA Status Info Alert Notices: Not Supported 00:12:39.304 EGE Aggregate Log Change Notices: Not Supported 00:12:39.304 Normal NVM Subsystem Shutdown event: Not Supported 00:12:39.304 Zone Descriptor Change Notices: Not Supported 00:12:39.304 Discovery Log Change Notices: Not Supported 00:12:39.304 Controller Attributes 00:12:39.304 128-bit Host Identifier: Supported 00:12:39.304 Non-Operational Permissive Mode: Not Supported 00:12:39.304 NVM Sets: Not Supported 00:12:39.304 Read Recovery Levels: Not Supported 00:12:39.304 Endurance Groups: Not Supported 00:12:39.304 Predictable Latency Mode: Not Supported 00:12:39.304 Traffic Based Keep ALive: Not Supported 00:12:39.304 Namespace Granularity: Not Supported 00:12:39.304 SQ Associations: Not Supported 00:12:39.304 UUID List: Not Supported 00:12:39.304 Multi-Domain Subsystem: Not Supported 00:12:39.304 Fixed Capacity Management: Not Supported 00:12:39.304 Variable Capacity Management: Not Supported 00:12:39.304 Delete Endurance Group: Not Supported 00:12:39.304 Delete NVM Set: Not Supported 00:12:39.304 Extended LBA Formats Supported: Not Supported 00:12:39.304 Flexible Data Placement Supported: Not Supported 00:12:39.304 00:12:39.304 Controller Memory Buffer Support 00:12:39.304 ================================ 00:12:39.304 Supported: No 00:12:39.304 00:12:39.304 Persistent Memory Region Support 00:12:39.304 ================================ 00:12:39.304 Supported: No 00:12:39.304 00:12:39.304 Admin Command Set Attributes 00:12:39.304 ============================ 00:12:39.304 Security Send/Receive: Not Supported 00:12:39.304 Format NVM: Not Supported 00:12:39.305 Firmware Activate/Download: Not Supported 00:12:39.305 Namespace Management: Not Supported 00:12:39.305 Device Self-Test: Not Supported 00:12:39.305 Directives: Not Supported 00:12:39.305 NVMe-MI: Not Supported 00:12:39.305 Virtualization Management: Not Supported 00:12:39.305 Doorbell Buffer Config: Not Supported 00:12:39.305 Get LBA Status Capability: Not Supported 00:12:39.305 Command & Feature Lockdown Capability: Not Supported 00:12:39.305 Abort Command Limit: 4 00:12:39.305 Async Event Request Limit: 4 00:12:39.305 Number of Firmware Slots: N/A 00:12:39.305 Firmware Slot 1 Read-Only: N/A 00:12:39.305 Firmware Activation Wit[2024-07-11 07:05:23.107099] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:39.305 [2024-07-11 07:05:23.107104] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fb000 00:12:39.305 [2024-07-11 07:05:23.107110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:39.305 [2024-07-11 07:05:23.107127] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:39.305 [2024-07-11 07:05:23.107133] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:39.305 [2024-07-11 07:05:23.107140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:39.305 [2024-07-11 07:05:23.107148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.107166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.107179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.107187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:39.305 hout Reset: N/A 00:12:39.305 Multiple Update Detection Support: N/A 00:12:39.305 Firmware Update Granularity: No Information Provided 00:12:39.305 Per-Namespace SMART Log: No 00:12:39.305 Asymmetric Namespace Access Log Page: Not Supported 00:12:39.305 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:39.305 Command Effects Log Page: Supported 00:12:39.305 Get Log Page Extended Data: Supported 00:12:39.305 Telemetry Log Pages: Not Supported 00:12:39.305 Persistent Event Log Pages: Not Supported 00:12:39.305 Supported Log Pages Log Page: May Support 00:12:39.305 Commands Supported & Effects Log Page: Not Supported 00:12:39.305 Feature Identifiers & Effects Log Page:May Support 00:12:39.305 NVMe-MI Commands & Effects Log Page: May Support 00:12:39.305 Data Area 4 for Telemetry Log: Not Supported 00:12:39.305 Error Log Page Entries Supported: 128 00:12:39.305 Keep Alive: Supported 00:12:39.305 Keep Alive Granularity: 10000 ms 00:12:39.305 00:12:39.305 NVM Command Set Attributes 00:12:39.305 ========================== 00:12:39.305 Submission Queue Entry Size 00:12:39.305 Max: 64 00:12:39.305 Min: 64 00:12:39.305 Completion Queue Entry Size 00:12:39.305 Max: 16 00:12:39.305 Min: 16 00:12:39.305 Number of Namespaces: 32 00:12:39.305 Compare Command: Supported 00:12:39.305 Write Uncorrectable Command: Not Supported 00:12:39.305 Dataset Management Command: Supported 00:12:39.305 Write Zeroes Command: Supported 00:12:39.305 Set Features Save Field: Not Supported 00:12:39.305 Reservations: Not Supported 00:12:39.305 Timestamp: Not Supported 00:12:39.305 Copy: Supported 00:12:39.305 Volatile Write Cache: Present 00:12:39.305 Atomic Write Unit (Normal): 1 00:12:39.305 Atomic Write Unit (PFail): 1 00:12:39.305 Atomic Compare & Write Unit: 1 00:12:39.305 Fused Compare & Write: Supported 00:12:39.305 Scatter-Gather List 00:12:39.305 SGL Command Set: Supported (Dword aligned) 00:12:39.305 SGL Keyed: Not Supported 00:12:39.305 SGL Bit Bucket Descriptor: Not Supported 00:12:39.305 SGL Metadata Pointer: Not Supported 00:12:39.305 Oversized SGL: Not Supported 00:12:39.305 SGL Metadata Address: Not Supported 00:12:39.305 SGL Offset: Not Supported 00:12:39.305 Transport SGL 
Data Block: Not Supported 00:12:39.305 Replay Protected Memory Block: Not Supported 00:12:39.305 00:12:39.305 Firmware Slot Information 00:12:39.305 ========================= 00:12:39.305 Active slot: 1 00:12:39.305 Slot 1 Firmware Revision: 24.01.1 00:12:39.305 00:12:39.305 00:12:39.305 Commands Supported and Effects 00:12:39.305 ============================== 00:12:39.305 Admin Commands 00:12:39.305 -------------- 00:12:39.305 Get Log Page (02h): Supported 00:12:39.305 Identify (06h): Supported 00:12:39.305 Abort (08h): Supported 00:12:39.305 Set Features (09h): Supported 00:12:39.305 Get Features (0Ah): Supported 00:12:39.305 Asynchronous Event Request (0Ch): Supported 00:12:39.305 Keep Alive (18h): Supported 00:12:39.305 I/O Commands 00:12:39.305 ------------ 00:12:39.305 Flush (00h): Supported LBA-Change 00:12:39.305 Write (01h): Supported LBA-Change 00:12:39.305 Read (02h): Supported 00:12:39.305 Compare (05h): Supported 00:12:39.305 Write Zeroes (08h): Supported LBA-Change 00:12:39.305 Dataset Management (09h): Supported LBA-Change 00:12:39.305 Copy (19h): Supported LBA-Change 00:12:39.305 Unknown (79h): Supported LBA-Change 00:12:39.305 Unknown (7Ah): Supported 00:12:39.305 00:12:39.305 Error Log 00:12:39.305 ========= 00:12:39.305 00:12:39.305 Arbitration 00:12:39.305 =========== 00:12:39.305 Arbitration Burst: 1 00:12:39.305 00:12:39.305 Power Management 00:12:39.305 ================ 00:12:39.305 Number of Power States: 1 00:12:39.305 Current Power State: Power State #0 00:12:39.305 Power State #0: 00:12:39.305 Max Power: 0.00 W 00:12:39.305 Non-Operational State: Operational 00:12:39.305 Entry Latency: Not Reported 00:12:39.305 Exit Latency: Not Reported 00:12:39.305 Relative Read Throughput: 0 00:12:39.305 Relative Read Latency: 0 00:12:39.305 Relative Write Throughput: 0 00:12:39.305 Relative Write Latency: 0 00:12:39.305 Idle Power: Not Reported 00:12:39.305 Active Power: Not Reported 00:12:39.305 Non-Operational Permissive Mode: Not Supported 00:12:39.305 00:12:39.305 Health Information 00:12:39.305 ================== 00:12:39.305 Critical Warnings: 00:12:39.305 Available Spare Space: OK 00:12:39.305 Temperature: OK 00:12:39.305 Device Reliability: OK 00:12:39.305 Read Only: No 00:12:39.305 Volatile Memory Backup: OK 00:12:39.305 Current Temperature: 0 Kelvin[2024-07-11 07:05:23.107306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:39.305 [2024-07-11 07:05:23.107324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.107357] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:39.305 [2024-07-11 07:05:23.107370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.107377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.107383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.107389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.305 [2024-07-11 07:05:23.110482] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:39.305 [2024-07-11 07:05:23.110528] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:39.305 [2024-07-11 07:05:23.111254] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:39.305 [2024-07-11 07:05:23.111272] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:39.305 [2024-07-11 07:05:23.112191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:39.305 [2024-07-11 07:05:23.112232] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:39.305 [2024-07-11 07:05:23.112288] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:39.305 [2024-07-11 07:05:23.114242] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:39.305 (-273 Celsius) 00:12:39.305 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:39.305 Available Spare: 0% 00:12:39.305 Available Spare Threshold: 0% 00:12:39.305 Life Percentage Used: 0% 00:12:39.305 Data Units Read: 0 00:12:39.305 Data Units Written: 0 00:12:39.305 Host Read Commands: 0 00:12:39.305 Host Write Commands: 0 00:12:39.305 Controller Busy Time: 0 minutes 00:12:39.305 Power Cycles: 0 00:12:39.305 Power On Hours: 0 hours 00:12:39.305 Unsafe Shutdowns: 0 00:12:39.305 Unrecoverable Media Errors: 0 00:12:39.305 Lifetime Error Log Entries: 0 00:12:39.305 Warning Temperature Time: 0 minutes 00:12:39.305 Critical Temperature Time: 0 minutes 00:12:39.305 00:12:39.305 Number of Queues 00:12:39.305 ================ 00:12:39.305 Number of I/O Submission Queues: 127 00:12:39.305 Number of I/O Completion Queues: 127 00:12:39.305 00:12:39.305 Active Namespaces 00:12:39.305 ================= 00:12:39.305 Namespace ID:1 00:12:39.305 Error Recovery Timeout: Unlimited 00:12:39.305 Command Set Identifier: NVM (00h) 00:12:39.306 Deallocate: Supported 00:12:39.306 Deallocated/Unwritten Error: Not Supported 00:12:39.306 Deallocated Read Value: Unknown 00:12:39.306 Deallocate in Write Zeroes: Not Supported 00:12:39.306 Deallocated Guard Field: 0xFFFF 00:12:39.306 Flush: Supported 00:12:39.306 Reservation: Supported 00:12:39.306 Namespace Sharing Capabilities: Multiple Controllers 00:12:39.306 Size (in LBAs): 131072 (0GiB) 00:12:39.306 Capacity (in LBAs): 131072 (0GiB) 00:12:39.306 Utilization (in LBAs): 131072 (0GiB) 00:12:39.306 NGUID: 39A2B785B2F24480A3C9EAD19B0AE1BF 00:12:39.306 UUID: 39a2b785-b2f2-4480-a3c9-ead19b0ae1bf 00:12:39.306 Thin Provisioning: Not Supported 00:12:39.306 Per-NS Atomic Units: Yes 00:12:39.306 Atomic Boundary Size (Normal): 0 00:12:39.306 Atomic Boundary Size (PFail): 0 00:12:39.306 Atomic Boundary Offset: 0 00:12:39.306 Maximum Single Source Range Length: 65535 00:12:39.306 Maximum Copy Length: 65535 00:12:39.306 Maximum Source Range Count: 1 00:12:39.306 NGUID/EUI64 Never Reused: No 00:12:39.306 Namespace Write Protected: No 00:12:39.306 Number of LBA Formats: 1 00:12:39.306 Current LBA Format: LBA Format #00 00:12:39.306 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:39.306 00:12:39.306 
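With the identify pass complete, the script switches to load generation: spdk_nvme_perf is pointed at the same vfio-user endpoint for read, write and mixed workloads. A sketch of the read invocation that follows, with the common flags annotated (the transport string, NQN and flag values are copied from the log; -s 256 and -g are carried over verbatim without interpretation):

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# -q 128  : 128 outstanding I/Os per queue
# -o 4096 : 4 KiB I/O size
# -w read : workload pattern (the next perf run below uses -w write)
# -t 5    : run for 5 seconds
# -c 0x2  : core mask, i.e. lcore 1 only
$perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The summary it prints is self-consistent: at 4096-byte I/Os, 39388.60 IOPS corresponds to 39388.60 x 4096 / 2^20 ≈ 153.86 MiB/s, which matches the reported throughput.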
07:05:23 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:44.572 Initializing NVMe Controllers 00:12:44.572 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:44.572 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:44.572 Initialization complete. Launching workers. 00:12:44.572 ======================================================== 00:12:44.572 Latency(us) 00:12:44.572 Device Information : IOPS MiB/s Average min max 00:12:44.572 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39388.60 153.86 3251.81 1008.75 9693.87 00:12:44.572 ======================================================== 00:12:44.572 Total : 39388.60 153.86 3251.81 1008.75 9693.87 00:12:44.572 00:12:44.572 07:05:28 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:49.841 Initializing NVMe Controllers 00:12:49.841 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:49.841 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:49.841 Initialization complete. Launching workers. 00:12:49.841 ======================================================== 00:12:49.841 Latency(us) 00:12:49.841 Device Information : IOPS MiB/s Average min max 00:12:49.841 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16077.70 62.80 7967.49 4988.24 14508.30 00:12:49.841 ======================================================== 00:12:49.841 Total : 16077.70 62.80 7967.49 4988.24 14508.30 00:12:49.841 00:12:49.841 07:05:33 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:55.105 Initializing NVMe Controllers 00:12:55.105 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:55.105 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:55.105 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:55.105 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:55.105 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:55.105 Initialization complete. Launching workers. 
00:12:55.105 Starting thread on core 2 00:12:55.105 Starting thread on core 3 00:12:55.105 Starting thread on core 1 00:12:55.105 07:05:39 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:59.292 Initializing NVMe Controllers 00:12:59.292 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.292 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.292 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:59.292 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:59.292 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:59.292 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:59.292 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:59.292 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:59.292 Initialization complete. Launching workers. 00:12:59.292 Starting thread on core 1 with urgent priority queue 00:12:59.292 Starting thread on core 2 with urgent priority queue 00:12:59.292 Starting thread on core 3 with urgent priority queue 00:12:59.292 Starting thread on core 0 with urgent priority queue 00:12:59.292 SPDK bdev Controller (SPDK1 ) core 0: 4322.67 IO/s 23.13 secs/100000 ios 00:12:59.292 SPDK bdev Controller (SPDK1 ) core 1: 4319.67 IO/s 23.15 secs/100000 ios 00:12:59.292 SPDK bdev Controller (SPDK1 ) core 2: 3276.33 IO/s 30.52 secs/100000 ios 00:12:59.292 SPDK bdev Controller (SPDK1 ) core 3: 3061.00 IO/s 32.67 secs/100000 ios 00:12:59.292 ======================================================== 00:12:59.292 00:12:59.292 07:05:42 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:59.292 Initializing NVMe Controllers 00:12:59.292 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.292 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.292 Namespace ID: 1 size: 0GB 00:12:59.292 Initialization complete. 00:12:59.292 INFO: using host memory buffer for IO 00:12:59.292 Hello world! 00:12:59.292 07:05:42 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:00.228 Initializing NVMe Controllers 00:13:00.228 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.228 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.228 Initialization complete. Launching workers. 
00:13:00.228 submit (in ns) avg, min, max = 6924.6, 3348.2, 4029863.6 00:13:00.228 complete (in ns) avg, min, max = 25836.4, 1869.1, 4044360.0 00:13:00.228 00:13:00.228 Submit histogram 00:13:00.228 ================ 00:13:00.228 Range in us Cumulative Count 00:13:00.228 3.345 - 3.360: 0.0556% ( 7) 00:13:00.228 3.360 - 3.375: 0.1430% ( 11) 00:13:00.228 3.375 - 3.389: 0.2224% ( 10) 00:13:00.228 3.389 - 3.404: 0.3971% ( 22) 00:13:00.228 3.404 - 3.418: 0.5560% ( 20) 00:13:00.228 3.418 - 3.433: 0.8340% ( 35) 00:13:00.228 3.433 - 3.447: 1.4774% ( 81) 00:13:00.228 3.447 - 3.462: 3.3360% ( 234) 00:13:00.228 3.462 - 3.476: 6.7276% ( 427) 00:13:00.228 3.476 - 3.491: 12.8276% ( 768) 00:13:00.228 3.491 - 3.505: 19.6664% ( 861) 00:13:00.228 3.505 - 3.520: 27.1644% ( 944) 00:13:00.228 3.520 - 3.535: 33.9476% ( 854) 00:13:00.228 3.535 - 3.549: 40.8102% ( 864) 00:13:00.228 3.549 - 3.564: 47.3392% ( 822) 00:13:00.228 3.564 - 3.578: 53.7967% ( 813) 00:13:00.228 3.578 - 3.593: 60.0238% ( 784) 00:13:00.228 3.593 - 3.607: 65.9333% ( 744) 00:13:00.228 3.607 - 3.622: 70.5481% ( 581) 00:13:00.228 3.622 - 3.636: 73.9555% ( 429) 00:13:00.228 3.636 - 3.651: 76.5528% ( 327) 00:13:00.228 3.651 - 3.665: 78.0858% ( 193) 00:13:00.228 3.665 - 3.680: 79.4122% ( 167) 00:13:00.228 3.680 - 3.695: 80.5719% ( 146) 00:13:00.228 3.695 - 3.709: 81.5806% ( 127) 00:13:00.228 3.709 - 3.724: 82.3670% ( 99) 00:13:00.228 3.724 - 3.753: 83.9873% ( 204) 00:13:00.228 3.753 - 3.782: 85.2423% ( 158) 00:13:00.228 3.782 - 3.811: 86.6958% ( 183) 00:13:00.228 3.811 - 3.840: 87.9587% ( 159) 00:13:00.228 3.840 - 3.869: 89.3646% ( 177) 00:13:00.228 3.869 - 3.898: 91.1199% ( 221) 00:13:00.228 3.898 - 3.927: 92.7482% ( 205) 00:13:00.228 3.927 - 3.956: 94.4003% ( 208) 00:13:00.228 3.956 - 3.985: 95.7188% ( 166) 00:13:00.228 3.985 - 4.015: 96.4734% ( 95) 00:13:00.228 4.015 - 4.044: 96.9182% ( 56) 00:13:00.228 4.044 - 4.073: 97.1485% ( 29) 00:13:00.228 4.073 - 4.102: 97.3868% ( 30) 00:13:00.228 4.102 - 4.131: 97.4980% ( 14) 00:13:00.228 4.131 - 4.160: 97.6172% ( 15) 00:13:00.228 4.160 - 4.189: 97.6569% ( 5) 00:13:00.228 4.189 - 4.218: 97.6966% ( 5) 00:13:00.228 4.218 - 4.247: 97.7125% ( 2) 00:13:00.228 4.247 - 4.276: 97.7601% ( 6) 00:13:00.228 4.276 - 4.305: 97.7919% ( 4) 00:13:00.228 4.305 - 4.335: 97.8157% ( 3) 00:13:00.228 4.335 - 4.364: 97.8237% ( 1) 00:13:00.228 4.364 - 4.393: 97.8396% ( 2) 00:13:00.228 4.393 - 4.422: 97.8554% ( 2) 00:13:00.228 4.451 - 4.480: 97.8713% ( 2) 00:13:00.228 4.480 - 4.509: 97.8793% ( 1) 00:13:00.228 4.509 - 4.538: 97.8872% ( 1) 00:13:00.228 4.538 - 4.567: 97.8952% ( 1) 00:13:00.228 4.567 - 4.596: 97.9110% ( 2) 00:13:00.228 4.655 - 4.684: 97.9269% ( 2) 00:13:00.228 4.684 - 4.713: 97.9508% ( 3) 00:13:00.228 4.713 - 4.742: 97.9905% ( 5) 00:13:00.228 4.742 - 4.771: 98.0143% ( 3) 00:13:00.228 4.771 - 4.800: 98.0620% ( 6) 00:13:00.228 4.800 - 4.829: 98.1017% ( 5) 00:13:00.228 4.829 - 4.858: 98.1732% ( 9) 00:13:00.228 4.858 - 4.887: 98.2526% ( 10) 00:13:00.228 4.887 - 4.916: 98.2685% ( 2) 00:13:00.228 4.916 - 4.945: 98.3082% ( 5) 00:13:00.228 4.945 - 4.975: 98.3320% ( 3) 00:13:00.228 4.975 - 5.004: 98.3558% ( 3) 00:13:00.228 5.004 - 5.033: 98.4114% ( 7) 00:13:00.228 5.033 - 5.062: 98.4909% ( 10) 00:13:00.228 5.062 - 5.091: 98.4988% ( 1) 00:13:00.228 5.091 - 5.120: 98.5385% ( 5) 00:13:00.228 5.120 - 5.149: 98.5624% ( 3) 00:13:00.228 5.149 - 5.178: 98.5703% ( 1) 00:13:00.228 5.178 - 5.207: 98.6021% ( 4) 00:13:00.228 5.207 - 5.236: 98.6338% ( 4) 00:13:00.228 5.236 - 5.265: 98.6736% ( 5) 00:13:00.228 5.265 - 5.295: 98.7053% ( 4) 
00:13:00.228 5.295 - 5.324: 98.7371% ( 4) 00:13:00.228 5.324 - 5.353: 98.7530% ( 2) 00:13:00.228 5.353 - 5.382: 98.7689% ( 2) 00:13:00.228 5.411 - 5.440: 98.7847% ( 2) 00:13:00.228 5.469 - 5.498: 98.7927% ( 1) 00:13:00.228 5.585 - 5.615: 98.8006% ( 1) 00:13:00.228 5.993 - 6.022: 98.8086% ( 1) 00:13:00.228 6.051 - 6.080: 98.8165% ( 1) 00:13:00.228 6.633 - 6.662: 98.8245% ( 1) 00:13:00.228 6.895 - 6.924: 98.8324% ( 1) 00:13:00.228 7.564 - 7.622: 98.8403% ( 1) 00:13:00.228 9.193 - 9.251: 98.8483% ( 1) 00:13:00.228 9.251 - 9.309: 98.8562% ( 1) 00:13:00.228 9.309 - 9.367: 98.8642% ( 1) 00:13:00.228 9.600 - 9.658: 98.8721% ( 1) 00:13:00.228 9.716 - 9.775: 98.8880% ( 2) 00:13:00.228 9.833 - 9.891: 98.9198% ( 4) 00:13:00.228 9.891 - 9.949: 98.9277% ( 1) 00:13:00.228 9.949 - 10.007: 98.9357% ( 1) 00:13:00.228 10.065 - 10.124: 98.9436% ( 1) 00:13:00.228 10.124 - 10.182: 98.9515% ( 1) 00:13:00.228 10.240 - 10.298: 98.9754% ( 3) 00:13:00.228 10.298 - 10.356: 98.9833% ( 1) 00:13:00.228 10.415 - 10.473: 98.9913% ( 1) 00:13:00.228 10.473 - 10.531: 98.9992% ( 1) 00:13:00.228 10.531 - 10.589: 99.0071% ( 1) 00:13:00.228 10.589 - 10.647: 99.0310% ( 3) 00:13:00.228 10.705 - 10.764: 99.0469% ( 2) 00:13:00.228 10.822 - 10.880: 99.0627% ( 2) 00:13:00.228 10.880 - 10.938: 99.0707% ( 1) 00:13:00.228 10.938 - 10.996: 99.0786% ( 1) 00:13:00.228 10.996 - 11.055: 99.0945% ( 2) 00:13:00.228 11.113 - 11.171: 99.1025% ( 1) 00:13:00.228 11.171 - 11.229: 99.1104% ( 1) 00:13:00.228 11.229 - 11.287: 99.1183% ( 1) 00:13:00.228 11.287 - 11.345: 99.1263% ( 1) 00:13:00.228 11.404 - 11.462: 99.1342% ( 1) 00:13:00.228 11.520 - 11.578: 99.1422% ( 1) 00:13:00.228 11.753 - 11.811: 99.1501% ( 1) 00:13:00.228 11.985 - 12.044: 99.1581% ( 1) 00:13:00.228 12.044 - 12.102: 99.1660% ( 1) 00:13:00.228 12.451 - 12.509: 99.1739% ( 1) 00:13:00.228 12.684 - 12.742: 99.1978% ( 3) 00:13:00.228 12.800 - 12.858: 99.2057% ( 1) 00:13:00.228 12.975 - 13.033: 99.2137% ( 1) 00:13:00.228 13.033 - 13.091: 99.2295% ( 2) 00:13:00.228 13.673 - 13.731: 99.2375% ( 1) 00:13:00.228 13.789 - 13.847: 99.2454% ( 1) 00:13:00.228 13.847 - 13.905: 99.2613% ( 2) 00:13:00.228 13.905 - 13.964: 99.2693% ( 1) 00:13:00.228 13.964 - 14.022: 99.2772% ( 1) 00:13:00.228 14.022 - 14.080: 99.2851% ( 1) 00:13:00.228 14.080 - 14.138: 99.2931% ( 1) 00:13:00.228 14.138 - 14.196: 99.3169% ( 3) 00:13:00.228 14.196 - 14.255: 99.3249% ( 1) 00:13:00.228 14.255 - 14.313: 99.3328% ( 1) 00:13:00.228 14.371 - 14.429: 99.3487% ( 2) 00:13:00.228 14.429 - 14.487: 99.3566% ( 1) 00:13:00.228 14.487 - 14.545: 99.3725% ( 2) 00:13:00.228 14.545 - 14.604: 99.3884% ( 2) 00:13:00.228 14.604 - 14.662: 99.4043% ( 2) 00:13:00.228 14.662 - 14.720: 99.4202% ( 2) 00:13:00.228 14.720 - 14.778: 99.4281% ( 1) 00:13:00.228 14.778 - 14.836: 99.4440% ( 2) 00:13:00.228 14.895 - 15.011: 99.4519% ( 1) 00:13:00.228 15.011 - 15.127: 99.4678% ( 2) 00:13:00.228 15.127 - 15.244: 99.4837% ( 2) 00:13:00.229 15.244 - 15.360: 99.5234% ( 5) 00:13:00.229 15.360 - 15.476: 99.5631% ( 5) 00:13:00.229 15.476 - 15.593: 99.5711% ( 1) 00:13:00.229 15.593 - 15.709: 99.5870% ( 2) 00:13:00.229 15.709 - 15.825: 99.6187% ( 4) 00:13:00.229 15.825 - 15.942: 99.6743% ( 7) 00:13:00.229 15.942 - 16.058: 99.7379% ( 8) 00:13:00.229 16.058 - 16.175: 99.7697% ( 4) 00:13:00.229 16.175 - 16.291: 99.7855% ( 2) 00:13:00.229 16.291 - 16.407: 99.8014% ( 2) 00:13:00.229 16.407 - 16.524: 99.8253% ( 3) 00:13:00.229 16.640 - 16.756: 99.8332% ( 1) 00:13:00.229 16.756 - 16.873: 99.8411% ( 1) 00:13:00.229 17.687 - 17.804: 99.8491% ( 1) 00:13:00.229 18.036 - 
18.153: 99.8570% ( 1) 00:13:00.229 18.153 - 18.269: 99.8650% ( 1) 00:13:00.229 18.269 - 18.385: 99.8729% ( 1) 00:13:00.229 19.433 - 19.549: 99.8888% ( 2) 00:13:00.229 19.549 - 19.665: 99.8967% ( 1) 00:13:00.229 22.807 - 22.924: 99.9047% ( 1) 00:13:00.229 30.720 - 30.953: 99.9126% ( 1) 00:13:00.229 30.953 - 31.185: 99.9206% ( 1) 00:13:00.229 3991.738 - 4021.527: 99.9682% ( 6) 00:13:00.229 4021.527 - 4051.316: 100.0000% ( 4) 00:13:00.229 00:13:00.229 Complete histogram 00:13:00.229 ================== 00:13:00.229 Range in us Cumulative Count 00:13:00.229 1.862 - 1.876: 0.4369% ( 55) 00:13:00.229 1.876 - 1.891: 10.6831% ( 1290) 00:13:00.229 1.891 - 1.905: 32.4146% ( 2736) 00:13:00.229 1.905 - 1.920: 57.3948% ( 3145) 00:13:00.229 1.920 - 1.935: 74.2971% ( 2128) 00:13:00.229 1.935 - 1.949: 80.1191% ( 733) 00:13:00.229 1.949 - 1.964: 82.9071% ( 351) 00:13:00.229 1.964 - 1.978: 84.8133% ( 240) 00:13:00.229 1.978 - 1.993: 87.2677% ( 309) 00:13:00.229 1.993 - 2.007: 89.4361% ( 273) 00:13:00.229 2.007 - 2.022: 91.0405% ( 202) 00:13:00.229 2.022 - 2.036: 92.1366% ( 138) 00:13:00.229 2.036 - 2.051: 92.8912% ( 95) 00:13:00.229 2.051 - 2.065: 93.4710% ( 73) 00:13:00.229 2.065 - 2.080: 93.9476% ( 60) 00:13:00.229 2.080 - 2.095: 94.3209% ( 47) 00:13:00.229 2.095 - 2.109: 94.8054% ( 61) 00:13:00.229 2.109 - 2.124: 95.1708% ( 46) 00:13:00.229 2.124 - 2.138: 95.4805% ( 39) 00:13:00.229 2.138 - 2.153: 95.7585% ( 35) 00:13:00.229 2.153 - 2.167: 96.0127% ( 32) 00:13:00.229 2.167 - 2.182: 96.2351% ( 28) 00:13:00.229 2.182 - 2.196: 96.5290% ( 37) 00:13:00.229 2.196 - 2.211: 96.7752% ( 31) 00:13:00.229 2.211 - 2.225: 97.0214% ( 31) 00:13:00.229 2.225 - 2.240: 97.2994% ( 35) 00:13:00.229 2.240 - 2.255: 97.4901% ( 24) 00:13:00.229 2.255 - 2.269: 97.7760% ( 36) 00:13:00.229 2.269 - 2.284: 97.8793% ( 13) 00:13:00.229 2.284 - 2.298: 98.0064% ( 16) 00:13:00.229 2.298 - 2.313: 98.0778% ( 9) 00:13:00.229 2.313 - 2.327: 98.1334% ( 7) 00:13:00.229 2.327 - 2.342: 98.1890% ( 7) 00:13:00.229 2.342 - 2.356: 98.2208% ( 4) 00:13:00.229 2.356 - 2.371: 98.2605% ( 5) 00:13:00.229 2.371 - 2.385: 98.3002% ( 5) 00:13:00.229 2.385 - 2.400: 98.3400% ( 5) 00:13:00.229 2.400 - 2.415: 98.3638% ( 3) 00:13:00.229 2.415 - 2.429: 98.3797% ( 2) 00:13:00.229 2.429 - 2.444: 98.3956% ( 2) 00:13:00.229 2.444 - 2.458: 98.4194% ( 3) 00:13:00.229 2.531 - 2.545: 98.4353% ( 2) 00:13:00.229 2.676 - 2.691: 98.4512% ( 2) 00:13:00.229 4.015 - 4.044: 98.4591% ( 1) 00:13:00.229 4.044 - 4.073: 98.4750% ( 2) 00:13:00.229 4.073 - 4.102: 98.4988% ( 3) 00:13:00.229 4.102 - 4.131: 98.5147% ( 2) 00:13:00.229 4.131 - 4.160: 98.5226% ( 1) 00:13:00.229 4.160 - 4.189: 98.5306% ( 1) 00:13:00.229 4.189 - 4.218: 98.5385% ( 1) 00:13:00.229 4.276 - 4.305: 98.5465% ( 1) 00:13:00.229 4.305 - 4.335: 98.5544% ( 1) 00:13:00.229 4.335 - 4.364: 98.5624% ( 1) 00:13:00.229 4.393 - 4.422: 98.5703% ( 1) 00:13:00.229 4.422 - 4.451: 98.5782% ( 1) 00:13:00.229 4.480 - 4.509: 98.5862% ( 1) 00:13:00.229 4.509 - 4.538: 98.6021% ( 2) 00:13:00.229 4.538 - 4.567: 98.6180% ( 2) 00:13:00.229 4.567 - 4.596: 98.6259% ( 1) 00:13:00.229 4.596 - 4.625: 98.6338% ( 1) 00:13:00.229 4.684 - 4.713: 98.6418% ( 1) 00:13:00.229 4.713 - 4.742: 98.6497% ( 1) 00:13:00.229 4.742 - 4.771: 98.6577% ( 1) 00:13:00.229 4.771 - 4.800: 98.6815% ( 3) 00:13:00.229 5.295 - 5.324: 98.6894% ( 1) 00:13:00.229 5.324 - 5.353: 98.6974% ( 1) 00:13:00.229 5.353 - 5.382: 98.7053% ( 1) 00:13:00.229 5.440 - 5.469: 98.7133% ( 1) 00:13:00.229 5.469 - 5.498: 98.7212% ( 1) 00:13:00.229 5.585 - 5.615: 98.7292% ( 1) 00:13:00.229 7.622 - 
7.680: 98.7371% ( 1) 00:13:00.229 7.796 - 7.855: 98.7530% ( 2) 00:13:00.229 7.855 - 7.913: 98.7689% ( 2) 00:13:00.229 8.087 - 8.145: 98.7768% ( 1) 00:13:00.229 8.145 - 8.204: 98.7927% ( 2) 00:13:00.229 8.204 - 8.262: 98.8006% ( 1) 00:13:00.229 8.320 - 8.378: 98.8165% ( 2) 00:13:00.229 8.436 - 8.495: 98.8483% ( 4) 00:13:00.229 8.495 - 8.553: 98.8562% ( 1) 00:13:00.229 8.553 - 8.611: 98.8642% ( 1) 00:13:00.229 8.611 - 8.669: 98.8721% ( 1) 00:13:00.229 8.669 - 8.727: 98.8880% ( 2) 00:13:00.229 8.727 - 8.785: 98.9118% ( 3) 00:13:00.229 8.785 - 8.844: 98.9357% ( 3) 00:13:00.229 8.960 - 9.018: 98.9515% ( 2) 00:13:00.229 9.018 - 9.076: 98.9833% ( 4) 00:13:00.229 9.076 - 9.135: 98.9913% ( 1) 00:13:00.229 9.135 - 9.193: 99.0071% ( 2) 00:13:00.229 9.193 - 9.251: 99.0230% ( 2) 00:13:00.229 9.251 - 9.309: 99.0310% ( 1) 00:13:00.229 9.309 - 9.367: 99.0389% ( 1) 00:13:00.229 9.367 - 9.425: 99.0548% ( 2) 00:13:00.229 9.484 - 9.542: 99.0627% ( 1) 00:13:00.229 9.542 - 9.600: 99.0866% ( 3) 00:13:00.229 9.600 - 9.658: 99.0945% ( 1) 00:13:00.229 9.775 - 9.833: 99.1025% ( 1) 00:13:00.229 9.833 - 9.891: 99.1342% ( 4) 00:13:00.229 9.891 - 9.949: 99.1422% ( 1) 00:13:00.229 9.949 - 10.007: 99.1501% ( 1) 00:13:00.229 10.473 - 10.531: 99.1581% ( 1) 00:13:00.229 10.647 - 10.705: 99.1660% ( 1) 00:13:00.229 10.764 - 10.822: 99.1739% ( 1) 00:13:00.229 11.055 - 11.113: 99.1819% ( 1) 00:13:00.229 11.113 - 11.171: 99.1898% ( 1) 00:13:00.229 11.578 - 11.636: 99.1978% ( 1) 00:13:00.229 11.636 - 11.695: 99.2137% ( 2) 00:13:00.229 11.927 - 11.985: 99.2216% ( 1) 00:13:00.229 12.625 - 12.684: 99.2295% ( 1) 00:13:00.229 12.684 - 12.742: 99.2375% ( 1) 00:13:00.229 12.916 - 12.975: 99.2454% ( 1) 00:13:00.229 13.731 - 13.789: 99.2534% ( 1) 00:13:00.229 14.604 - 14.662: 99.2613% ( 1) 00:13:00.229 15.011 - 15.127: 99.2693% ( 1) 00:13:00.229 15.360 - 15.476: 99.2772% ( 1) 00:13:00.229 16.291 - 16.407: 99.2931% ( 2) 00:13:00.229 16.640 - 16.756: 99.3090% ( 2) 00:13:00.229 16.756 - 16.873: 99.3169% ( 1) 00:13:00.229 16.873 - 16.989: 99.3249% ( 1) 00:13:00.229 17.338 - 17.455: 99.3328% ( 1) 00:13:00.229 17.455 - 17.571: 99.3487% ( 2) 00:13:00.229 17.687 - 17.804: 99.3566% ( 1) 00:13:00.229 18.036 - 18.153: 99.3646% ( 1) 00:13:00.229 21.178 - 21.295: 99.3725% ( 1) 00:13:00.229 21.644 - 21.760: 99.3805% ( 1) 00:13:00.229 28.276 - 28.393: 99.3884% ( 1) 00:13:00.229 35.840 - 36.073: 99.3963% ( 1) 00:13:00.229 3023.593 - 3038.487: 99.4043% ( 1) 00:13:00.229 3038.487 - 3053.382: 99.4281% ( 3) 00:13:00.229 3053.382 - 3068.276: 99.4361% ( 1) 00:13:00.229 3932.160 - 3961.949: 99.4440% ( 1) 00:13:00.229 3961.949 - 3991.738: 99.4678% ( 3) 00:13:00.229 3991.738 - 4021.527: 99.8729% ( 51) 00:13:00.229 4021.527 - 4051.316: 100.0000% ( 16) 00:13:00.229 00:13:00.229 07:05:44 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:00.229 07:05:44 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:00.229 07:05:44 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:00.229 07:05:44 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:00.229 07:05:44 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:00.488 [2024-07-11 07:05:44.381043] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:00.488 [ 00:13:00.488 { 00:13:00.488 
"allow_any_host": true, 00:13:00.488 "hosts": [], 00:13:00.488 "listen_addresses": [], 00:13:00.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:00.488 "subtype": "Discovery" 00:13:00.488 }, 00:13:00.488 { 00:13:00.488 "allow_any_host": true, 00:13:00.488 "hosts": [], 00:13:00.488 "listen_addresses": [ 00:13:00.488 { 00:13:00.488 "adrfam": "IPv4", 00:13:00.488 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:00.488 "transport": "VFIOUSER", 00:13:00.488 "trsvcid": "0", 00:13:00.488 "trtype": "VFIOUSER" 00:13:00.488 } 00:13:00.488 ], 00:13:00.488 "max_cntlid": 65519, 00:13:00.488 "max_namespaces": 32, 00:13:00.488 "min_cntlid": 1, 00:13:00.488 "model_number": "SPDK bdev Controller", 00:13:00.488 "namespaces": [ 00:13:00.488 { 00:13:00.488 "bdev_name": "Malloc1", 00:13:00.488 "name": "Malloc1", 00:13:00.488 "nguid": "39A2B785B2F24480A3C9EAD19B0AE1BF", 00:13:00.488 "nsid": 1, 00:13:00.488 "uuid": "39a2b785-b2f2-4480-a3c9-ead19b0ae1bf" 00:13:00.489 } 00:13:00.489 ], 00:13:00.489 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:00.489 "serial_number": "SPDK1", 00:13:00.489 "subtype": "NVMe" 00:13:00.489 }, 00:13:00.489 { 00:13:00.489 "allow_any_host": true, 00:13:00.489 "hosts": [], 00:13:00.489 "listen_addresses": [ 00:13:00.489 { 00:13:00.489 "adrfam": "IPv4", 00:13:00.489 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:00.489 "transport": "VFIOUSER", 00:13:00.489 "trsvcid": "0", 00:13:00.489 "trtype": "VFIOUSER" 00:13:00.489 } 00:13:00.489 ], 00:13:00.489 "max_cntlid": 65519, 00:13:00.489 "max_namespaces": 32, 00:13:00.489 "min_cntlid": 1, 00:13:00.489 "model_number": "SPDK bdev Controller", 00:13:00.489 "namespaces": [ 00:13:00.489 { 00:13:00.489 "bdev_name": "Malloc2", 00:13:00.489 "name": "Malloc2", 00:13:00.489 "nguid": "478E9304D50F405DA3192309A442B8B9", 00:13:00.489 "nsid": 1, 00:13:00.489 "uuid": "478e9304-d50f-405d-a319-2309a442b8b9" 00:13:00.489 } 00:13:00.489 ], 00:13:00.489 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:00.489 "serial_number": "SPDK2", 00:13:00.489 "subtype": "NVMe" 00:13:00.489 } 00:13:00.489 ] 00:13:00.489 07:05:44 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:00.489 07:05:44 -- target/nvmf_vfio_user.sh@34 -- # aerpid=70660 00:13:00.489 07:05:44 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:00.489 07:05:44 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:00.489 07:05:44 -- common/autotest_common.sh@1244 -- # local i=0 00:13:00.489 07:05:44 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:00.489 07:05:44 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:13:00.489 07:05:44 -- common/autotest_common.sh@1247 -- # i=1 00:13:00.489 07:05:44 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:00.489 07:05:44 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:00.489 07:05:44 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:13:00.489 07:05:44 -- common/autotest_common.sh@1247 -- # i=2 00:13:00.489 07:05:44 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:00.747 07:05:44 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:00.747 07:05:44 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:00.747 07:05:44 -- common/autotest_common.sh@1255 -- # return 0 00:13:00.747 07:05:44 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:00.747 07:05:44 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:01.016 Malloc3 00:13:01.016 07:05:44 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:01.304 07:05:45 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:01.304 Asynchronous Event Request test 00:13:01.304 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.304 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.304 Registering asynchronous event callbacks... 00:13:01.304 Starting namespace attribute notice tests for all controllers... 00:13:01.304 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:01.304 aer_cb - Changed Namespace 00:13:01.304 Cleaning up... 00:13:01.304 [ 00:13:01.304 { 00:13:01.304 "allow_any_host": true, 00:13:01.304 "hosts": [], 00:13:01.304 "listen_addresses": [], 00:13:01.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:01.304 "subtype": "Discovery" 00:13:01.304 }, 00:13:01.304 { 00:13:01.304 "allow_any_host": true, 00:13:01.304 "hosts": [], 00:13:01.304 "listen_addresses": [ 00:13:01.304 { 00:13:01.304 "adrfam": "IPv4", 00:13:01.304 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:01.304 "transport": "VFIOUSER", 00:13:01.304 "trsvcid": "0", 00:13:01.304 "trtype": "VFIOUSER" 00:13:01.304 } 00:13:01.304 ], 00:13:01.304 "max_cntlid": 65519, 00:13:01.304 "max_namespaces": 32, 00:13:01.304 "min_cntlid": 1, 00:13:01.304 "model_number": "SPDK bdev Controller", 00:13:01.304 "namespaces": [ 00:13:01.304 { 00:13:01.304 "bdev_name": "Malloc1", 00:13:01.304 "name": "Malloc1", 00:13:01.304 "nguid": "39A2B785B2F24480A3C9EAD19B0AE1BF", 00:13:01.304 "nsid": 1, 00:13:01.304 "uuid": "39a2b785-b2f2-4480-a3c9-ead19b0ae1bf" 00:13:01.304 }, 00:13:01.304 { 00:13:01.304 "bdev_name": "Malloc3", 00:13:01.304 "name": "Malloc3", 00:13:01.304 "nguid": "1D22CB1AE99F4270BAC1EAAAE359DB58", 00:13:01.304 "nsid": 2, 00:13:01.304 "uuid": "1d22cb1a-e99f-4270-bac1-eaaae359db58" 00:13:01.304 } 00:13:01.304 ], 00:13:01.304 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:01.304 "serial_number": "SPDK1", 00:13:01.304 "subtype": "NVMe" 00:13:01.304 }, 00:13:01.304 { 00:13:01.304 "allow_any_host": true, 00:13:01.304 "hosts": [], 00:13:01.304 "listen_addresses": [ 00:13:01.304 { 00:13:01.304 "adrfam": "IPv4", 00:13:01.304 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:01.304 "transport": "VFIOUSER", 00:13:01.304 "trsvcid": "0", 00:13:01.304 "trtype": "VFIOUSER" 00:13:01.304 } 00:13:01.304 ], 00:13:01.304 "max_cntlid": 65519, 00:13:01.304 "max_namespaces": 32, 00:13:01.304 "min_cntlid": 1, 00:13:01.304 "model_number": "SPDK bdev Controller", 00:13:01.304 "namespaces": [ 00:13:01.304 { 00:13:01.304 "bdev_name": "Malloc2", 00:13:01.304 "name": "Malloc2", 00:13:01.304 "nguid": "478E9304D50F405DA3192309A442B8B9", 00:13:01.304 "nsid": 1, 00:13:01.304 "uuid": "478e9304-d50f-405d-a319-2309a442b8b9" 00:13:01.304 } 00:13:01.304 ], 00:13:01.304 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:01.304 "serial_number": "SPDK2", 00:13:01.304 "subtype": "NVMe" 00:13:01.304 } 00:13:01.304 ] 00:13:01.304 07:05:45 -- target/nvmf_vfio_user.sh@44 -- # wait 70660 
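The run above (target/nvmf_vfio_user.sh steps @25 through @44) hot-adds a second namespace to nqn.2019-07.io.spdk:cnode1 while test/nvme/aer/aer waits for the resulting Namespace Attribute Notice. A minimal standalone sketch of that sequence, assuming the same repo path (/home/vagrant/spdk_repo/spdk), vfio-user socket, and subsystem NQN as this run, with flag values copied from the log above rather than independently verified, could be replayed against an already-running target as follows:

#!/usr/bin/env bash
# Sketch: replay the namespace hot-add / AER flow exercised by the test above.
# Assumes a running nvmf target that already exposes nqn.2019-07.io.spdk:cnode1
# on the vfio-user1 socket; paths and flags are taken from this log.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
traddr=/var/run/vfio-user/domain/vfio-user1/1
subnqn=nqn.2019-07.io.spdk:cnode1
touch_file=/tmp/aer_touch_file

rm -f "$touch_file"
# Start the AER listener in the background; -t names the file it touches once
# its asynchronous event callbacks are registered (-n 2 and -g as in the run above).
"$aer" -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" -n 2 -g -t "$touch_file" &
aerpid=$!

# Block until the listener is armed, then clear the marker file.
while [ ! -e "$touch_file" ]; do sleep 0.1; done
rm -f "$touch_file"

# Hot-add a namespace: create a 64 MB malloc bdev (512-byte blocks) and attach it as NSID 2.
"$rpc" bdev_malloc_create 64 512 --name Malloc3
"$rpc" nvmf_subsystem_add_ns "$subnqn" Malloc3 -n 2

# The controller raises a Namespace Attribute Notice; the listener logs its
# aer_cb and exits, and the new namespace then appears in the subsystem dump.
wait "$aerpid"
"$rpc" nvmf_get_subsystems

The ordering matters: the bdev is only created after the touch file appears, so the listener is guaranteed to have registered its callbacks before the subsystem changes, which is why the aer_cb and "Changed Namespace" lines show up in the nvmf_get_subsystems output above.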
00:13:01.304 07:05:45 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.304 07:05:45 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:01.304 07:05:45 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:01.304 07:05:45 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:01.304 [2024-07-11 07:05:45.312241] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:01.304 [2024-07-11 07:05:45.312303] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70694 ] 00:13:01.589 [2024-07-11 07:05:45.448664] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:01.589 [2024-07-11 07:05:45.459047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:01.589 [2024-07-11 07:05:45.459094] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa774778000 00:13:01.589 [2024-07-11 07:05:45.460050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.461048] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.463461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.464058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.465067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.466079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.467088] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.468087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:01.589 [2024-07-11 07:05:45.469107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:01.589 [2024-07-11 07:05:45.469143] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa77476d000 00:13:01.589 [2024-07-11 07:05:45.470106] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:01.589 [2024-07-11 07:05:45.481769] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:01.589 [2024-07-11 07:05:45.481820] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:01.589 [2024-07-11 07:05:45.486954] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:01.589 [2024-07-11 07:05:45.487021] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:01.589 [2024-07-11 07:05:45.487098] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:01.589 [2024-07-11 07:05:45.487124] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:01.589 [2024-07-11 07:05:45.487131] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:01.589 [2024-07-11 07:05:45.487957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:01.589 [2024-07-11 07:05:45.487975] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:01.589 [2024-07-11 07:05:45.487984] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:01.589 [2024-07-11 07:05:45.488955] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:01.589 [2024-07-11 07:05:45.488980] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:01.589 [2024-07-11 07:05:45.488993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:01.589 [2024-07-11 07:05:45.489965] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:01.589 [2024-07-11 07:05:45.489991] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:01.589 [2024-07-11 07:05:45.490979] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:01.589 [2024-07-11 07:05:45.491004] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:01.589 [2024-07-11 07:05:45.491012] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:01.589 [2024-07-11 07:05:45.491021] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:01.589 [2024-07-11 07:05:45.491128] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:01.589 [2024-07-11 07:05:45.491134] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:13:01.589 [2024-07-11 07:05:45.491139] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:01.589 [2024-07-11 07:05:45.491992] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:01.589 [2024-07-11 07:05:45.492981] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:01.589 [2024-07-11 07:05:45.493989] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:01.589 [2024-07-11 07:05:45.495030] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:01.589 [2024-07-11 07:05:45.496000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:01.590 [2024-07-11 07:05:45.496025] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:01.590 [2024-07-11 07:05:45.496032] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.496052] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:01.590 [2024-07-11 07:05:45.496068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.496082] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:01.590 [2024-07-11 07:05:45.496089] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:01.590 [2024-07-11 07:05:45.496104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.503460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.503499] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:01.590 [2024-07-11 07:05:45.503506] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:01.590 [2024-07-11 07:05:45.503510] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:01.590 [2024-07-11 07:05:45.503514] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:01.590 [2024-07-11 07:05:45.503519] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:01.590 [2024-07-11 07:05:45.503524] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:01.590 [2024-07-11 07:05:45.503529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.503545] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.503560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.511458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.511490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.590 [2024-07-11 07:05:45.511505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.590 [2024-07-11 07:05:45.511514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.590 [2024-07-11 07:05:45.511522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:01.590 [2024-07-11 07:05:45.511528] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.511543] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.511554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.519458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.519479] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:01.590 [2024-07-11 07:05:45.519498] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.519507] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.519519] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.519532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.527458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.527524] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.527537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns 
(timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.527547] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:01.590 [2024-07-11 07:05:45.527552] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:01.590 [2024-07-11 07:05:45.527559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.535460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.535507] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:01.590 [2024-07-11 07:05:45.535520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.535531] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.535541] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:01.590 [2024-07-11 07:05:45.535546] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:01.590 [2024-07-11 07:05:45.535553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.543459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.543502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.543516] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.543527] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:01.590 [2024-07-11 07:05:45.543532] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:01.590 [2024-07-11 07:05:45.543539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.551463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.551498] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.551510] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.551524] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.551533] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] 
setting state to set doorbell buffer config (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.551538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.551543] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:01.590 [2024-07-11 07:05:45.551548] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:01.590 [2024-07-11 07:05:45.551553] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:01.590 [2024-07-11 07:05:45.551572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.559463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.559487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.567457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.567487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.575462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.575490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.582464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.582494] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:01.590 [2024-07-11 07:05:45.582502] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:01.590 [2024-07-11 07:05:45.582505] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:01.590 [2024-07-11 07:05:45.582508] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:01.590 [2024-07-11 07:05:45.582515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:01.590 [2024-07-11 07:05:45.582524] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:01.590 [2024-07-11 07:05:45.582529] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:01.590 [2024-07-11 07:05:45.582535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.582542] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:01.590 [2024-07-11 07:05:45.582547] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:01.590 [2024-07-11 07:05:45.582555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.582563] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:01.590 [2024-07-11 07:05:45.582568] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:01.590 [2024-07-11 07:05:45.582574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:01.590 [2024-07-11 07:05:45.589464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.589495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.589509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:01.590 [2024-07-11 07:05:45.589518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:01.590 ===================================================== 00:13:01.590 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:01.590 ===================================================== 00:13:01.590 Controller Capabilities/Features 00:13:01.591 ================================ 00:13:01.591 Vendor ID: 4e58 00:13:01.591 Subsystem Vendor ID: 4e58 00:13:01.591 Serial Number: SPDK2 00:13:01.591 Model Number: SPDK bdev Controller 00:13:01.591 Firmware Version: 24.01.1 00:13:01.591 Recommended Arb Burst: 6 00:13:01.591 IEEE OUI Identifier: 8d 6b 50 00:13:01.591 Multi-path I/O 00:13:01.591 May have multiple subsystem ports: Yes 00:13:01.591 May have multiple controllers: Yes 00:13:01.591 Associated with SR-IOV VF: No 00:13:01.591 Max Data Transfer Size: 131072 00:13:01.591 Max Number of Namespaces: 32 00:13:01.591 Max Number of I/O Queues: 127 00:13:01.591 NVMe Specification Version (VS): 1.3 00:13:01.591 NVMe Specification Version (Identify): 1.3 00:13:01.591 Maximum Queue Entries: 256 00:13:01.591 Contiguous Queues Required: Yes 00:13:01.591 Arbitration Mechanisms Supported 00:13:01.591 Weighted Round Robin: Not Supported 00:13:01.591 Vendor Specific: Not Supported 00:13:01.591 Reset Timeout: 15000 ms 00:13:01.591 Doorbell Stride: 4 bytes 00:13:01.591 NVM Subsystem Reset: Not Supported 00:13:01.591 Command Sets Supported 00:13:01.591 NVM Command Set: Supported 00:13:01.591 Boot Partition: Not Supported 00:13:01.591 Memory Page Size Minimum: 4096 bytes 00:13:01.591 Memory Page Size Maximum: 4096 bytes 00:13:01.591 Persistent Memory Region: Not Supported 00:13:01.591 Optional Asynchronous Events Supported 00:13:01.591 Namespace Attribute Notices: Supported 00:13:01.591 Firmware Activation Notices: Not Supported 00:13:01.591 ANA Change Notices: Not Supported 00:13:01.591 PLE Aggregate Log Change Notices: Not Supported 00:13:01.591 LBA Status Info Alert Notices: Not Supported 00:13:01.591 EGE Aggregate Log Change Notices: Not Supported 00:13:01.591 Normal NVM Subsystem Shutdown event: Not Supported 00:13:01.591 Zone Descriptor Change Notices: Not Supported 
00:13:01.591 Discovery Log Change Notices: Not Supported 00:13:01.591 Controller Attributes 00:13:01.591 128-bit Host Identifier: Supported 00:13:01.591 Non-Operational Permissive Mode: Not Supported 00:13:01.591 NVM Sets: Not Supported 00:13:01.591 Read Recovery Levels: Not Supported 00:13:01.591 Endurance Groups: Not Supported 00:13:01.591 Predictable Latency Mode: Not Supported 00:13:01.591 Traffic Based Keep ALive: Not Supported 00:13:01.591 Namespace Granularity: Not Supported 00:13:01.591 SQ Associations: Not Supported 00:13:01.591 UUID List: Not Supported 00:13:01.591 Multi-Domain Subsystem: Not Supported 00:13:01.591 Fixed Capacity Management: Not Supported 00:13:01.591 Variable Capacity Management: Not Supported 00:13:01.591 Delete Endurance Group: Not Supported 00:13:01.591 Delete NVM Set: Not Supported 00:13:01.591 Extended LBA Formats Supported: Not Supported 00:13:01.591 Flexible Data Placement Supported: Not Supported 00:13:01.591 00:13:01.591 Controller Memory Buffer Support 00:13:01.591 ================================ 00:13:01.591 Supported: No 00:13:01.591 00:13:01.591 Persistent Memory Region Support 00:13:01.591 ================================ 00:13:01.591 Supported: No 00:13:01.591 00:13:01.591 Admin Command Set Attributes 00:13:01.591 ============================ 00:13:01.591 Security Send/Receive: Not Supported 00:13:01.591 Format NVM: Not Supported 00:13:01.591 Firmware Activate/Download: Not Supported 00:13:01.591 Namespace Management: Not Supported 00:13:01.591 Device Self-Test: Not Supported 00:13:01.591 Directives: Not Supported 00:13:01.591 NVMe-MI: Not Supported 00:13:01.591 Virtualization Management: Not Supported 00:13:01.591 Doorbell Buffer Config: Not Supported 00:13:01.591 Get LBA Status Capability: Not Supported 00:13:01.591 Command & Feature Lockdown Capability: Not Supported 00:13:01.591 Abort Command Limit: 4 00:13:01.591 Async Event Request Limit: 4 00:13:01.591 Number of Firmware Slots: N/A 00:13:01.591 Firmware Slot 1 Read-Only: N/A 00:13:01.591 Firmware Activation Without Reset: N/A 00:13:01.591 Multiple Update Detection Support: N/A 00:13:01.591 Firmware Update Granularity: No Information Provided 00:13:01.591 Per-Namespace SMART Log: No 00:13:01.591 Asymmetric Namespace Access Log Page: Not Supported 00:13:01.591 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:01.591 Command Effects Log Page: Supported 00:13:01.591 Get Log Page Extended Data: Supported 00:13:01.591 Telemetry Log Pages: Not Supported 00:13:01.591 Persistent Event Log Pages: Not Supported 00:13:01.591 Supported Log Pages Log Page: May Support 00:13:01.591 Commands Supported & Effects Log Page: Not Supported 00:13:01.591 Feature Identifiers & Effects Log Page:May Support 00:13:01.591 NVMe-MI Commands & Effects Log Page: May Support 00:13:01.591 Data Area 4 for Telemetry Log: Not Supported 00:13:01.591 Error Log Page Entries Supported: 128 00:13:01.591 Keep Alive: Supported 00:13:01.591 Keep Alive Granularity: 10000 ms 00:13:01.591 00:13:01.591 NVM Command Set Attributes 00:13:01.591 ========================== 00:13:01.591 Submission Queue Entry Size 00:13:01.591 Max: 64 00:13:01.591 Min: 64 00:13:01.591 Completion Queue Entry Size 00:13:01.591 Max: 16 00:13:01.591 Min: 16 00:13:01.591 Number of Namespaces: 32 00:13:01.591 Compare Command: Supported 00:13:01.591 Write Uncorrectable Command: Not Supported 00:13:01.591 Dataset Management Command: Supported 00:13:01.591 Write Zeroes Command: Supported 00:13:01.591 Set Features Save Field: Not Supported 00:13:01.591 Reservations: Not 
Supported 00:13:01.591 Timestamp: Not Supported 00:13:01.591 Copy: Supported 00:13:01.591 Volatile Write Cache: Present 00:13:01.591 Atomic Write Unit (Normal): 1 00:13:01.591 Atomic Write Unit (PFail): 1 00:13:01.591 Atomic Compare & Write Unit: 1 00:13:01.591 Fused Compare & Write: Supported 00:13:01.591 Scatter-Gather List 00:13:01.591 SGL Command Set: Supported (Dword aligned) 00:13:01.591 SGL Keyed: Not Supported 00:13:01.591 SGL Bit Bucket Descriptor: Not Supported 00:13:01.591 SGL Metadata Pointer: Not Supported 00:13:01.591 Oversized SGL: Not Supported 00:13:01.591 SGL Metadata Address: Not Supported 00:13:01.591 SGL Offset: Not Supported 00:13:01.591 Transport SGL Data Block: Not Supported 00:13:01.591 Replay Protected Memory Block: Not Supported 00:13:01.591 00:13:01.591 Firmware Slot Information 00:13:01.591 ========================= 00:13:01.591 Active slot: 1 00:13:01.591 Slot 1 Firmware Revision: 24.01.1 00:13:01.591 00:13:01.591 00:13:01.591 Commands Supported and Effects 00:13:01.591 ============================== 00:13:01.591 Admin Commands 00:13:01.591 -------------- 00:13:01.591 Get Log Page (02h): Supported 00:13:01.591 Identify (06h): Supported 00:13:01.591 Abort (08h): Supported 00:13:01.591 Set Features (09h): Supported 00:13:01.591 Get Features (0Ah): Supported 00:13:01.591 Asynchronous Event Request (0Ch): Supported 00:13:01.591 Keep Alive (18h): Supported 00:13:01.591 I/O Commands 00:13:01.591 ------------ 00:13:01.591 Flush (00h): Supported LBA-Change 00:13:01.591 Write (01h): Supported LBA-Change 00:13:01.591 Read (02h): Supported 00:13:01.591 Compare (05h): Supported 00:13:01.591 Write Zeroes (08h): Supported LBA-Change 00:13:01.591 Dataset Management (09h): Supported LBA-Change 00:13:01.591 Copy (19h): Supported LBA-Change 00:13:01.591 Unknown (79h): Supported LBA-Change 00:13:01.591 Unknown (7Ah): Supported 00:13:01.591 00:13:01.591 Error Log 00:13:01.591 ========= 00:13:01.591 00:13:01.591 Arbitration 00:13:01.591 =========== 00:13:01.591 Arbitration Burst: 1 00:13:01.591 00:13:01.591 Power Management 00:13:01.591 ================ 00:13:01.591 Number of Power States: 1 00:13:01.591 Current Power State: Power State #0 00:13:01.591 Power State #0: 00:13:01.591 Max Power: 0.00 W 00:13:01.591 Non-Operational State: Operational 00:13:01.591 Entry Latency: Not Reported 00:13:01.591 Exit Latency: Not Reported 00:13:01.591 Relative Read Throughput: 0 00:13:01.591 Relative Read Latency: 0 00:13:01.591 Relative Write Throughput: 0 00:13:01.591 Relative Write Latency: 0 00:13:01.591 Idle Power: Not Reported 00:13:01.591 Active Power: Not Reported 00:13:01.591 Non-Operational Permissive Mode: Not Supported 00:13:01.591 00:13:01.591 Health Information 00:13:01.591 ================== 00:13:01.591 Critical Warnings: 00:13:01.591 Available Spare Space: OK 00:13:01.591 Temperature: OK 00:13:01.591 Device Reliability: OK 00:13:01.591 Read Only: No 00:13:01.591 Volatile Memory Backup: OK 00:13:01.591 Current Temperature: 0 Kelvin[2024-07-11 07:05:45.589640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:01.591 [2024-07-11 07:05:45.597474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:01.591 [2024-07-11 07:05:45.597535] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:01.592 [2024-07-11 07:05:45.597549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:01.592 [2024-07-11 07:05:45.597556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:01.592 [2024-07-11 07:05:45.597562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:01.592 [2024-07-11 07:05:45.597568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:01.592 [2024-07-11 07:05:45.597640] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:01.592 [2024-07-11 07:05:45.597656] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:01.592 [2024-07-11 07:05:45.598709] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:01.592 [2024-07-11 07:05:45.598734] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:01.592 [2024-07-11 07:05:45.599672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:01.592 [2024-07-11 07:05:45.599693] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:01.592 [2024-07-11 07:05:45.599744] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:01.592 [2024-07-11 07:05:45.600794] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:01.592 (-273 Celsius) 00:13:01.592 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:01.592 Available Spare: 0% 00:13:01.592 Available Spare Threshold: 0% 00:13:01.592 Life Percentage Used: 0% 00:13:01.592 Data Units Read: 0 00:13:01.592 Data Units Written: 0 00:13:01.592 Host Read Commands: 0 00:13:01.592 Host Write Commands: 0 00:13:01.592 Controller Busy Time: 0 minutes 00:13:01.592 Power Cycles: 0 00:13:01.592 Power On Hours: 0 hours 00:13:01.592 Unsafe Shutdowns: 0 00:13:01.592 Unrecoverable Media Errors: 0 00:13:01.592 Lifetime Error Log Entries: 0 00:13:01.592 Warning Temperature Time: 0 minutes 00:13:01.592 Critical Temperature Time: 0 minutes 00:13:01.592 00:13:01.592 Number of Queues 00:13:01.592 ================ 00:13:01.592 Number of I/O Submission Queues: 127 00:13:01.592 Number of I/O Completion Queues: 127 00:13:01.592 00:13:01.592 Active Namespaces 00:13:01.592 ================= 00:13:01.592 Namespace ID:1 00:13:01.592 Error Recovery Timeout: Unlimited 00:13:01.592 Command Set Identifier: NVM (00h) 00:13:01.592 Deallocate: Supported 00:13:01.592 Deallocated/Unwritten Error: Not Supported 00:13:01.592 Deallocated Read Value: Unknown 00:13:01.592 Deallocate in Write Zeroes: Not Supported 00:13:01.592 Deallocated Guard Field: 0xFFFF 00:13:01.592 Flush: Supported 00:13:01.592 Reservation: Supported 00:13:01.592 Namespace Sharing Capabilities: Multiple Controllers 00:13:01.592 Size (in LBAs): 131072 (0GiB) 00:13:01.592 Capacity (in LBAs): 131072 (0GiB) 00:13:01.592 Utilization (in LBAs): 131072 (0GiB) 00:13:01.592 NGUID: 
478E9304D50F405DA3192309A442B8B9 00:13:01.592 UUID: 478e9304-d50f-405d-a319-2309a442b8b9 00:13:01.592 Thin Provisioning: Not Supported 00:13:01.592 Per-NS Atomic Units: Yes 00:13:01.592 Atomic Boundary Size (Normal): 0 00:13:01.592 Atomic Boundary Size (PFail): 0 00:13:01.592 Atomic Boundary Offset: 0 00:13:01.592 Maximum Single Source Range Length: 65535 00:13:01.592 Maximum Copy Length: 65535 00:13:01.592 Maximum Source Range Count: 1 00:13:01.592 NGUID/EUI64 Never Reused: No 00:13:01.592 Namespace Write Protected: No 00:13:01.592 Number of LBA Formats: 1 00:13:01.592 Current LBA Format: LBA Format #00 00:13:01.592 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:01.592 00:13:01.592 07:05:45 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:08.173 Initializing NVMe Controllers 00:13:08.173 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.173 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:08.173 Initialization complete. Launching workers. 00:13:08.173 ======================================================== 00:13:08.173 Latency(us) 00:13:08.173 Device Information : IOPS MiB/s Average min max 00:13:08.173 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39067.80 152.61 3276.38 984.09 9616.19 00:13:08.173 ======================================================== 00:13:08.173 Total : 39067.80 152.61 3276.38 984.09 9616.19 00:13:08.173 00:13:08.173 07:05:51 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:12.360 Initializing NVMe Controllers 00:13:12.360 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:12.360 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:12.360 Initialization complete. Launching workers. 00:13:12.360 ======================================================== 00:13:12.360 Latency(us) 00:13:12.360 Device Information : IOPS MiB/s Average min max 00:13:12.360 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39411.11 153.95 3247.65 1028.17 9590.48 00:13:12.360 ======================================================== 00:13:12.360 Total : 39411.11 153.95 3247.65 1028.17 9590.48 00:13:12.360 00:13:12.360 07:05:56 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:18.929 Initializing NVMe Controllers 00:13:18.929 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:18.929 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:18.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:18.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:18.929 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:18.929 Initialization complete. 
Launching workers. 00:13:18.929 Starting thread on core 2 00:13:18.929 Starting thread on core 3 00:13:18.929 Starting thread on core 1 00:13:18.929 07:06:01 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:21.460 Initializing NVMe Controllers 00:13:21.460 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.460 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.460 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:21.460 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:21.460 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:21.460 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:21.460 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:21.460 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:21.460 Initialization complete. Launching workers. 00:13:21.460 Starting thread on core 1 with urgent priority queue 00:13:21.460 Starting thread on core 2 with urgent priority queue 00:13:21.460 Starting thread on core 3 with urgent priority queue 00:13:21.460 Starting thread on core 0 with urgent priority queue 00:13:21.460 SPDK bdev Controller (SPDK2 ) core 0: 5420.33 IO/s 18.45 secs/100000 ios 00:13:21.460 SPDK bdev Controller (SPDK2 ) core 1: 5832.33 IO/s 17.15 secs/100000 ios 00:13:21.460 SPDK bdev Controller (SPDK2 ) core 2: 4702.33 IO/s 21.27 secs/100000 ios 00:13:21.460 SPDK bdev Controller (SPDK2 ) core 3: 4889.67 IO/s 20.45 secs/100000 ios 00:13:21.460 ======================================================== 00:13:21.460 00:13:21.460 07:06:05 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:21.460 Initializing NVMe Controllers 00:13:21.460 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.460 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:21.460 Namespace ID: 1 size: 0GB 00:13:21.460 Initialization complete. 00:13:21.460 INFO: using host memory buffer for IO 00:13:21.460 Hello world! 00:13:21.460 07:06:05 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:22.836 Initializing NVMe Controllers 00:13:22.836 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.836 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.836 Initialization complete. Launching workers. 
00:13:22.836 submit (in ns) avg, min, max = 7689.0, 3268.2, 4018547.3 00:13:22.836 complete (in ns) avg, min, max = 27654.1, 1724.5, 7042810.9 00:13:22.836 00:13:22.836 Submit histogram 00:13:22.836 ================ 00:13:22.836 Range in us Cumulative Count 00:13:22.836 3.258 - 3.273: 0.0163% ( 2) 00:13:22.836 3.273 - 3.287: 0.1464% ( 16) 00:13:22.836 3.287 - 3.302: 0.5204% ( 46) 00:13:22.836 3.302 - 3.316: 1.3010% ( 96) 00:13:22.836 3.316 - 3.331: 5.3098% ( 493) 00:13:22.836 3.331 - 3.345: 11.2457% ( 730) 00:13:22.836 3.345 - 3.360: 19.4015% ( 1003) 00:13:22.836 3.360 - 3.375: 28.0371% ( 1062) 00:13:22.836 3.375 - 3.389: 37.1361% ( 1119) 00:13:22.836 3.389 - 3.404: 45.4789% ( 1026) 00:13:22.836 3.404 - 3.418: 51.8784% ( 787) 00:13:22.836 3.418 - 3.433: 57.9037% ( 741) 00:13:22.836 3.433 - 3.447: 63.8640% ( 733) 00:13:22.836 3.447 - 3.462: 68.8974% ( 619) 00:13:22.836 3.462 - 3.476: 73.8331% ( 607) 00:13:22.836 3.476 - 3.491: 76.6385% ( 345) 00:13:22.836 3.491 - 3.505: 78.1021% ( 180) 00:13:22.836 3.505 - 3.520: 79.0454% ( 116) 00:13:22.836 3.520 - 3.535: 80.0781% ( 127) 00:13:22.836 3.535 - 3.549: 80.9725% ( 110) 00:13:22.836 3.549 - 3.564: 81.8670% ( 110) 00:13:22.836 3.564 - 3.578: 82.7614% ( 110) 00:13:22.836 3.578 - 3.593: 83.6965% ( 115) 00:13:22.836 3.593 - 3.607: 84.7292% ( 127) 00:13:22.836 3.607 - 3.622: 85.6643% ( 115) 00:13:22.836 3.622 - 3.636: 86.5832% ( 113) 00:13:22.836 3.636 - 3.651: 87.2337% ( 80) 00:13:22.837 3.651 - 3.665: 88.0550% ( 101) 00:13:22.837 3.665 - 3.680: 88.6892% ( 78) 00:13:22.837 3.680 - 3.695: 89.1690% ( 59) 00:13:22.837 3.695 - 3.709: 89.6894% ( 64) 00:13:22.837 3.709 - 3.724: 90.3155% ( 77) 00:13:22.837 3.724 - 3.753: 92.2833% ( 242) 00:13:22.837 3.753 - 3.782: 94.2755% ( 245) 00:13:22.837 3.782 - 3.811: 95.4627% ( 146) 00:13:22.837 3.811 - 3.840: 96.6824% ( 150) 00:13:22.837 3.840 - 3.869: 97.2109% ( 65) 00:13:22.837 3.869 - 3.898: 97.4386% ( 28) 00:13:22.837 3.898 - 3.927: 97.6907% ( 31) 00:13:22.837 3.927 - 3.956: 97.9184% ( 28) 00:13:22.837 3.956 - 3.985: 98.0647% ( 18) 00:13:22.837 3.985 - 4.015: 98.1704% ( 13) 00:13:22.837 4.015 - 4.044: 98.2192% ( 6) 00:13:22.837 4.044 - 4.073: 98.2680% ( 6) 00:13:22.837 4.073 - 4.102: 98.2924% ( 3) 00:13:22.837 4.102 - 4.131: 98.3087% ( 2) 00:13:22.837 4.131 - 4.160: 98.3412% ( 4) 00:13:22.837 4.160 - 4.189: 98.3493% ( 1) 00:13:22.837 4.189 - 4.218: 98.3575% ( 1) 00:13:22.837 4.276 - 4.305: 98.3656% ( 1) 00:13:22.837 4.305 - 4.335: 98.3900% ( 3) 00:13:22.837 4.422 - 4.451: 98.3981% ( 1) 00:13:22.837 4.509 - 4.538: 98.4062% ( 1) 00:13:22.837 4.538 - 4.567: 98.4144% ( 1) 00:13:22.837 4.596 - 4.625: 98.4469% ( 4) 00:13:22.837 4.625 - 4.655: 98.4713% ( 3) 00:13:22.837 4.655 - 4.684: 98.5445% ( 9) 00:13:22.837 4.684 - 4.713: 98.5851% ( 5) 00:13:22.837 4.713 - 4.742: 98.6421% ( 7) 00:13:22.837 4.742 - 4.771: 98.6908% ( 6) 00:13:22.837 4.771 - 4.800: 98.7315% ( 5) 00:13:22.837 4.800 - 4.829: 98.7803% ( 6) 00:13:22.837 4.829 - 4.858: 98.8209% ( 5) 00:13:22.837 4.858 - 4.887: 98.9023% ( 10) 00:13:22.837 4.887 - 4.916: 98.9592% ( 7) 00:13:22.837 4.916 - 4.945: 99.0161% ( 7) 00:13:22.837 4.945 - 4.975: 99.0324% ( 2) 00:13:22.837 4.975 - 5.004: 99.0730% ( 5) 00:13:22.837 5.004 - 5.033: 99.1055% ( 4) 00:13:22.837 5.033 - 5.062: 99.1137% ( 1) 00:13:22.837 5.062 - 5.091: 99.1299% ( 2) 00:13:22.837 5.091 - 5.120: 99.1381% ( 1) 00:13:22.837 5.120 - 5.149: 99.1543% ( 2) 00:13:22.837 5.149 - 5.178: 99.1706% ( 2) 00:13:22.837 5.178 - 5.207: 99.2113% ( 5) 00:13:22.837 5.265 - 5.295: 99.2194% ( 1) 00:13:22.837 5.295 - 5.324: 99.2275% 
( 1) 00:13:22.837 5.324 - 5.353: 99.2438% ( 2) 00:13:22.837 5.382 - 5.411: 99.2519% ( 1) 00:13:22.837 5.673 - 5.702: 99.2682% ( 2) 00:13:22.837 5.702 - 5.731: 99.2763% ( 1) 00:13:22.837 5.760 - 5.789: 99.2844% ( 1) 00:13:22.837 5.993 - 6.022: 99.2926% ( 1) 00:13:22.837 7.156 - 7.185: 99.3007% ( 1) 00:13:22.837 7.913 - 7.971: 99.3088% ( 1) 00:13:22.837 8.902 - 8.960: 99.3170% ( 1) 00:13:22.837 9.135 - 9.193: 99.3414% ( 3) 00:13:22.837 9.425 - 9.484: 99.3495% ( 1) 00:13:22.837 9.542 - 9.600: 99.3658% ( 2) 00:13:22.837 9.658 - 9.716: 99.3739% ( 1) 00:13:22.837 9.716 - 9.775: 99.3901% ( 2) 00:13:22.837 9.775 - 9.833: 99.4064% ( 2) 00:13:22.837 9.891 - 9.949: 99.4145% ( 1) 00:13:22.837 10.007 - 10.065: 99.4308% ( 2) 00:13:22.837 10.124 - 10.182: 99.4471% ( 2) 00:13:22.837 10.182 - 10.240: 99.4796% ( 4) 00:13:22.837 10.298 - 10.356: 99.4877% ( 1) 00:13:22.837 10.415 - 10.473: 99.5121% ( 3) 00:13:22.837 10.473 - 10.531: 99.5202% ( 1) 00:13:22.837 10.531 - 10.589: 99.5284% ( 1) 00:13:22.837 10.589 - 10.647: 99.5365% ( 1) 00:13:22.837 10.647 - 10.705: 99.5446% ( 1) 00:13:22.837 10.764 - 10.822: 99.5528% ( 1) 00:13:22.837 10.822 - 10.880: 99.5853% ( 4) 00:13:22.837 10.880 - 10.938: 99.5934% ( 1) 00:13:22.837 10.938 - 10.996: 99.6016% ( 1) 00:13:22.837 10.996 - 11.055: 99.6341% ( 4) 00:13:22.837 11.055 - 11.113: 99.6422% ( 1) 00:13:22.837 11.113 - 11.171: 99.6585% ( 2) 00:13:22.837 11.345 - 11.404: 99.6666% ( 1) 00:13:22.837 11.462 - 11.520: 99.6747% ( 1) 00:13:22.837 11.636 - 11.695: 99.6829% ( 1) 00:13:22.837 11.753 - 11.811: 99.6991% ( 2) 00:13:22.837 11.811 - 11.869: 99.7073% ( 1) 00:13:22.837 12.102 - 12.160: 99.7154% ( 1) 00:13:22.837 12.160 - 12.218: 99.7235% ( 1) 00:13:22.837 13.324 - 13.382: 99.7317% ( 1) 00:13:22.837 13.498 - 13.556: 99.7398% ( 1) 00:13:22.837 13.556 - 13.615: 99.7479% ( 1) 00:13:22.837 13.731 - 13.789: 99.7561% ( 1) 00:13:22.837 13.789 - 13.847: 99.7642% ( 1) 00:13:22.837 13.964 - 14.022: 99.7723% ( 1) 00:13:22.837 14.604 - 14.662: 99.7805% ( 1) 00:13:22.837 15.011 - 15.127: 99.7886% ( 1) 00:13:22.837 15.360 - 15.476: 99.7967% ( 1) 00:13:22.837 15.709 - 15.825: 99.8130% ( 2) 00:13:22.837 16.524 - 16.640: 99.8211% ( 1) 00:13:22.837 19.549 - 19.665: 99.8292% ( 1) 00:13:22.837 19.782 - 19.898: 99.8455% ( 2) 00:13:22.837 25.600 - 25.716: 99.8536% ( 1) 00:13:22.837 26.182 - 26.298: 99.8618% ( 1) 00:13:22.837 26.531 - 26.647: 99.8699% ( 1) 00:13:22.837 28.393 - 28.509: 99.8780% ( 1) 00:13:22.837 37.935 - 38.167: 99.8862% ( 1) 00:13:22.837 39.098 - 39.331: 99.8943% ( 1) 00:13:22.837 3038.487 - 3053.382: 99.9024% ( 1) 00:13:22.837 3991.738 - 4021.527: 100.0000% ( 12) 00:13:22.837 00:13:22.837 Complete histogram 00:13:22.837 ================== 00:13:22.837 Range in us Cumulative Count 00:13:22.837 1.724 - 1.731: 0.7156% ( 88) 00:13:22.837 1.731 - 1.738: 16.6450% ( 1959) 00:13:22.837 1.738 - 1.745: 55.6920% ( 4802) 00:13:22.837 1.745 - 1.753: 80.8505% ( 3094) 00:13:22.837 1.753 - 1.760: 85.5342% ( 576) 00:13:22.837 1.760 - 1.767: 86.4450% ( 112) 00:13:22.837 1.767 - 1.775: 86.9410% ( 61) 00:13:22.837 1.775 - 1.782: 87.1768% ( 29) 00:13:22.837 1.782 - 1.789: 87.4451% ( 33) 00:13:22.837 1.789 - 1.796: 87.9330% ( 60) 00:13:22.837 1.796 - 1.804: 88.3477% ( 51) 00:13:22.837 1.804 - 1.811: 88.5266% ( 22) 00:13:22.837 1.811 - 1.818: 88.6730% ( 18) 00:13:22.837 1.818 - 1.825: 88.7868% ( 14) 00:13:22.837 1.825 - 1.833: 89.0470% ( 32) 00:13:22.837 1.833 - 1.840: 90.7465% ( 209) 00:13:22.837 1.840 - 1.847: 93.2591% ( 309) 00:13:22.837 1.847 - 1.855: 94.3649% ( 136) 00:13:22.837 1.855 - 1.862: 
94.6577% ( 36) 00:13:22.837 1.862 - 1.876: 95.2025% ( 67) 00:13:22.837 1.876 - 1.891: 96.0075% ( 99) 00:13:22.837 1.891 - 1.905: 96.8450% ( 103) 00:13:22.837 1.905 - 1.920: 97.3654% ( 64) 00:13:22.837 1.920 - 1.935: 97.6338% ( 33) 00:13:22.837 1.935 - 1.949: 97.7720% ( 17) 00:13:22.837 1.949 - 1.964: 97.8940% ( 15) 00:13:22.837 1.964 - 1.978: 98.0159% ( 15) 00:13:22.837 1.978 - 1.993: 98.2355% ( 27) 00:13:22.837 1.993 - 2.007: 98.2924% ( 7) 00:13:22.837 2.007 - 2.022: 98.3168% ( 3) 00:13:22.837 2.022 - 2.036: 98.3575% ( 5) 00:13:22.837 2.036 - 2.051: 98.4225% ( 8) 00:13:22.837 2.051 - 2.065: 98.4469% ( 3) 00:13:22.837 2.065 - 2.080: 98.4794% ( 4) 00:13:22.837 2.080 - 2.095: 98.4957% ( 2) 00:13:22.837 2.095 - 2.109: 98.5120% ( 2) 00:13:22.837 2.109 - 2.124: 98.5201% ( 1) 00:13:22.837 2.124 - 2.138: 98.5445% ( 3) 00:13:22.837 2.138 - 2.153: 98.5607% ( 2) 00:13:22.837 2.153 - 2.167: 98.6177% ( 7) 00:13:22.837 2.167 - 2.182: 98.6339% ( 2) 00:13:22.837 2.182 - 2.196: 98.6746% ( 5) 00:13:22.837 2.196 - 2.211: 98.7234% ( 6) 00:13:22.837 2.211 - 2.225: 98.7396% ( 2) 00:13:22.837 2.225 - 2.240: 98.7559% ( 2) 00:13:22.837 2.255 - 2.269: 98.7722% ( 2) 00:13:22.837 2.269 - 2.284: 98.7966% ( 3) 00:13:22.838 2.385 - 2.400: 98.8047% ( 1) 00:13:22.838 2.400 - 2.415: 98.8128% ( 1) 00:13:22.838 3.811 - 3.840: 98.8209% ( 1) 00:13:22.838 3.898 - 3.927: 98.8453% ( 3) 00:13:22.838 3.956 - 3.985: 98.8535% ( 1) 00:13:22.838 3.985 - 4.015: 98.8860% ( 4) 00:13:22.838 4.044 - 4.073: 98.8941% ( 1) 00:13:22.838 4.073 - 4.102: 98.9185% ( 3) 00:13:22.838 4.102 - 4.131: 98.9267% ( 1) 00:13:22.838 4.160 - 4.189: 98.9348% ( 1) 00:13:22.838 4.189 - 4.218: 98.9510% ( 2) 00:13:22.838 4.247 - 4.276: 98.9673% ( 2) 00:13:22.838 4.335 - 4.364: 98.9836% ( 2) 00:13:22.838 4.364 - 4.393: 98.9917% ( 1) 00:13:22.838 4.393 - 4.422: 98.9998% ( 1) 00:13:22.838 4.422 - 4.451: 99.0080% ( 1) 00:13:22.838 4.480 - 4.509: 99.0161% ( 1) 00:13:22.838 4.509 - 4.538: 99.0242% ( 1) 00:13:22.838 4.567 - 4.596: 99.0324% ( 1) 00:13:22.838 4.596 - 4.625: 99.0405% ( 1) 00:13:22.838 4.684 - 4.713: 99.0486% ( 1) 00:13:22.838 5.207 - 5.236: 99.0568% ( 1) 00:13:22.838 5.411 - 5.440: 99.0649% ( 1) 00:13:22.838 5.673 - 5.702: 99.0730% ( 1) 00:13:22.838 7.622 - 7.680: 99.0812% ( 1) 00:13:22.838 7.680 - 7.738: 99.0893% ( 1) 00:13:22.838 7.738 - 7.796: 99.1055% ( 2) 00:13:22.838 7.971 - 8.029: 99.1137% ( 1) 00:13:22.838 8.145 - 8.204: 99.1218% ( 1) 00:13:22.838 8.320 - 8.378: 99.1381% ( 2) 00:13:22.838 8.436 - 8.495: 99.1462% ( 1) 00:13:22.838 8.553 - 8.611: 99.1543% ( 1) 00:13:22.838 8.727 - 8.785: 99.1706% ( 2) 00:13:22.838 8.844 - 8.902: 99.1787% ( 1) 00:13:22.838 9.018 - 9.076: 99.1950% ( 2) 00:13:22.838 9.251 - 9.309: 99.2031% ( 1) 00:13:22.838 9.309 - 9.367: 99.2113% ( 1) 00:13:22.838 9.367 - 9.425: 99.2194% ( 1) 00:13:22.838 9.484 - 9.542: 99.2275% ( 1) 00:13:22.838 9.658 - 9.716: 99.2356% ( 1) 00:13:22.838 10.007 - 10.065: 99.2519% ( 2) 00:13:22.838 10.356 - 10.415: 99.2600% ( 1) 00:13:22.838 11.113 - 11.171: 99.2682% ( 1) 00:13:22.838 11.229 - 11.287: 99.2763% ( 1) 00:13:22.838 13.033 - 13.091: 99.2844% ( 1) 00:13:22.838 13.556 - 13.615: 99.2926% ( 1) 00:13:22.838 17.222 - 17.338: 99.3007% ( 1) 00:13:22.838 17.455 - 17.571: 99.3088% ( 1) 00:13:22.838 17.571 - 17.687: 99.3251% ( 2) 00:13:22.838 19.549 - 19.665: 99.3332% ( 1) 00:13:22.838 25.135 - 25.251: 99.3414% ( 1) 00:13:22.838 33.280 - 33.513: 99.3495% ( 1) 00:13:22.838 56.087 - 56.320: 99.3576% ( 1) 00:13:22.838 3023.593 - 3038.487: 99.3658% ( 1) 00:13:22.838 3038.487 - 3053.382: 99.3739% ( 1) 
00:13:22.838 3961.949 - 3991.738: 99.4064% ( 4) 00:13:22.838 3991.738 - 4021.527: 99.9268% ( 64) 00:13:22.838 4021.527 - 4051.316: 99.9919% ( 8) 00:13:22.838 7030.225 - 7060.015: 100.0000% ( 1) 00:13:22.838 00:13:22.838 07:06:06 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:22.838 07:06:06 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:22.838 07:06:06 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:22.838 07:06:06 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:22.838 07:06:06 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.097 [ 00:13:23.097 { 00:13:23.097 "allow_any_host": true, 00:13:23.097 "hosts": [], 00:13:23.097 "listen_addresses": [], 00:13:23.097 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:23.097 "subtype": "Discovery" 00:13:23.097 }, 00:13:23.097 { 00:13:23.097 "allow_any_host": true, 00:13:23.097 "hosts": [], 00:13:23.097 "listen_addresses": [ 00:13:23.097 { 00:13:23.097 "adrfam": "IPv4", 00:13:23.097 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:23.097 "transport": "VFIOUSER", 00:13:23.097 "trsvcid": "0", 00:13:23.097 "trtype": "VFIOUSER" 00:13:23.097 } 00:13:23.097 ], 00:13:23.097 "max_cntlid": 65519, 00:13:23.097 "max_namespaces": 32, 00:13:23.097 "min_cntlid": 1, 00:13:23.097 "model_number": "SPDK bdev Controller", 00:13:23.097 "namespaces": [ 00:13:23.097 { 00:13:23.097 "bdev_name": "Malloc1", 00:13:23.097 "name": "Malloc1", 00:13:23.097 "nguid": "39A2B785B2F24480A3C9EAD19B0AE1BF", 00:13:23.097 "nsid": 1, 00:13:23.097 "uuid": "39a2b785-b2f2-4480-a3c9-ead19b0ae1bf" 00:13:23.097 }, 00:13:23.097 { 00:13:23.097 "bdev_name": "Malloc3", 00:13:23.097 "name": "Malloc3", 00:13:23.097 "nguid": "1D22CB1AE99F4270BAC1EAAAE359DB58", 00:13:23.097 "nsid": 2, 00:13:23.097 "uuid": "1d22cb1a-e99f-4270-bac1-eaaae359db58" 00:13:23.097 } 00:13:23.097 ], 00:13:23.097 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:23.097 "serial_number": "SPDK1", 00:13:23.097 "subtype": "NVMe" 00:13:23.097 }, 00:13:23.097 { 00:13:23.097 "allow_any_host": true, 00:13:23.097 "hosts": [], 00:13:23.097 "listen_addresses": [ 00:13:23.097 { 00:13:23.097 "adrfam": "IPv4", 00:13:23.097 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:23.097 "transport": "VFIOUSER", 00:13:23.097 "trsvcid": "0", 00:13:23.097 "trtype": "VFIOUSER" 00:13:23.097 } 00:13:23.097 ], 00:13:23.097 "max_cntlid": 65519, 00:13:23.097 "max_namespaces": 32, 00:13:23.097 "min_cntlid": 1, 00:13:23.097 "model_number": "SPDK bdev Controller", 00:13:23.097 "namespaces": [ 00:13:23.097 { 00:13:23.097 "bdev_name": "Malloc2", 00:13:23.097 "name": "Malloc2", 00:13:23.097 "nguid": "478E9304D50F405DA3192309A442B8B9", 00:13:23.097 "nsid": 1, 00:13:23.097 "uuid": "478e9304-d50f-405d-a319-2309a442b8b9" 00:13:23.097 } 00:13:23.097 ], 00:13:23.097 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:23.097 "serial_number": "SPDK2", 00:13:23.097 "subtype": "NVMe" 00:13:23.097 } 00:13:23.097 ] 00:13:23.097 07:06:07 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:23.097 07:06:07 -- target/nvmf_vfio_user.sh@34 -- # aerpid=70955 00:13:23.097 07:06:07 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:23.097 07:06:07 -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:23.097 07:06:07 -- common/autotest_common.sh@1244 -- # local i=0 00:13:23.097 07:06:07 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.097 07:06:07 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:13:23.097 07:06:07 -- common/autotest_common.sh@1247 -- # i=1 00:13:23.097 07:06:07 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:23.356 07:06:07 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.356 07:06:07 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:13:23.356 07:06:07 -- common/autotest_common.sh@1247 -- # i=2 00:13:23.356 07:06:07 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:13:23.356 07:06:07 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.356 07:06:07 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.356 07:06:07 -- common/autotest_common.sh@1255 -- # return 0 00:13:23.356 07:06:07 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:23.356 07:06:07 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:23.615 Malloc4 00:13:23.615 07:06:07 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:23.873 07:06:07 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.873 Asynchronous Event Request test 00:13:23.873 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.873 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:23.873 Registering asynchronous event callbacks... 00:13:23.873 Starting namespace attribute notice tests for all controllers... 00:13:23.873 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:23.873 aer_cb - Changed Namespace 00:13:23.873 Cleaning up... 
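The AER exchange recorded above (the aer listener started with a touch file, then Malloc4 hot-added as NSID 2 on cnode2) can be reproduced by hand against the same vfio-user endpoint. A minimal sketch follows; the binary path, RPC commands, NQN and transport ID are the ones visible in this log, while the variable names and the bounded wait loop are illustrative assumptions:

  #!/usr/bin/env bash
  # Sketch of the namespace-attach AER flow shown above (assumes a running
  # nvmf_tgt already serving the vfio-user2 endpoint, as in this run).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk          # assumption: local checkout location
  RPC="$SPDK_DIR/scripts/rpc.py"
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  TOUCH=/tmp/aer_touch_file

  # Start the AER listener in the background; it creates $TOUCH once it has
  # registered for namespace-attribute notices.
  "$SPDK_DIR/test/nvme/aer/aer" -r "$TRID" -n 2 -g -t "$TOUCH" &
  aerpid=$!

  # Bounded wait for the touch file before changing the subsystem.
  for _ in $(seq 1 200); do
      [[ -e $TOUCH ]] && break
      sleep 0.1
  done

  # Hot-add a second namespace; this is what triggers the AER seen in the log.
  "$RPC" bdev_malloc_create 64 512 --name Malloc4
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  "$RPC" nvmf_get_subsystems      # the new namespace should now be listed

  wait "$aerpid"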
00:13:24.132 [ 00:13:24.132 { 00:13:24.132 "allow_any_host": true, 00:13:24.132 "hosts": [], 00:13:24.132 "listen_addresses": [], 00:13:24.132 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.132 "subtype": "Discovery" 00:13:24.132 }, 00:13:24.132 { 00:13:24.132 "allow_any_host": true, 00:13:24.132 "hosts": [], 00:13:24.132 "listen_addresses": [ 00:13:24.132 { 00:13:24.132 "adrfam": "IPv4", 00:13:24.132 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.132 "transport": "VFIOUSER", 00:13:24.132 "trsvcid": "0", 00:13:24.132 "trtype": "VFIOUSER" 00:13:24.132 } 00:13:24.132 ], 00:13:24.132 "max_cntlid": 65519, 00:13:24.132 "max_namespaces": 32, 00:13:24.132 "min_cntlid": 1, 00:13:24.132 "model_number": "SPDK bdev Controller", 00:13:24.132 "namespaces": [ 00:13:24.132 { 00:13:24.132 "bdev_name": "Malloc1", 00:13:24.132 "name": "Malloc1", 00:13:24.132 "nguid": "39A2B785B2F24480A3C9EAD19B0AE1BF", 00:13:24.132 "nsid": 1, 00:13:24.132 "uuid": "39a2b785-b2f2-4480-a3c9-ead19b0ae1bf" 00:13:24.132 }, 00:13:24.132 { 00:13:24.132 "bdev_name": "Malloc3", 00:13:24.132 "name": "Malloc3", 00:13:24.132 "nguid": "1D22CB1AE99F4270BAC1EAAAE359DB58", 00:13:24.132 "nsid": 2, 00:13:24.132 "uuid": "1d22cb1a-e99f-4270-bac1-eaaae359db58" 00:13:24.132 } 00:13:24.132 ], 00:13:24.132 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.132 "serial_number": "SPDK1", 00:13:24.132 "subtype": "NVMe" 00:13:24.132 }, 00:13:24.132 { 00:13:24.132 "allow_any_host": true, 00:13:24.132 "hosts": [], 00:13:24.132 "listen_addresses": [ 00:13:24.132 { 00:13:24.132 "adrfam": "IPv4", 00:13:24.132 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.132 "transport": "VFIOUSER", 00:13:24.132 "trsvcid": "0", 00:13:24.132 "trtype": "VFIOUSER" 00:13:24.132 } 00:13:24.132 ], 00:13:24.132 "max_cntlid": 65519, 00:13:24.132 "max_namespaces": 32, 00:13:24.132 "min_cntlid": 1, 00:13:24.132 "model_number": "SPDK bdev Controller", 00:13:24.132 "namespaces": [ 00:13:24.132 { 00:13:24.132 "bdev_name": "Malloc2", 00:13:24.132 "name": "Malloc2", 00:13:24.132 "nguid": "478E9304D50F405DA3192309A442B8B9", 00:13:24.132 "nsid": 1, 00:13:24.132 "uuid": "478e9304-d50f-405d-a319-2309a442b8b9" 00:13:24.132 }, 00:13:24.132 { 00:13:24.132 "bdev_name": "Malloc4", 00:13:24.132 "name": "Malloc4", 00:13:24.132 "nguid": "EE6B4F229077433F9818A9D19359766B", 00:13:24.132 "nsid": 2, 00:13:24.132 "uuid": "ee6b4f22-9077-433f-9818-a9d19359766b" 00:13:24.132 } 00:13:24.132 ], 00:13:24.132 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.132 "serial_number": "SPDK2", 00:13:24.132 "subtype": "NVMe" 00:13:24.132 } 00:13:24.132 ] 00:13:24.132 07:06:08 -- target/nvmf_vfio_user.sh@44 -- # wait 70955 00:13:24.132 07:06:08 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:24.132 07:06:08 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70275 00:13:24.132 07:06:08 -- common/autotest_common.sh@926 -- # '[' -z 70275 ']' 00:13:24.133 07:06:08 -- common/autotest_common.sh@930 -- # kill -0 70275 00:13:24.133 07:06:08 -- common/autotest_common.sh@931 -- # uname 00:13:24.133 07:06:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:24.133 07:06:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70275 00:13:24.133 07:06:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:24.133 07:06:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:24.133 killing process with pid 70275 00:13:24.133 07:06:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70275' 00:13:24.133 07:06:08 -- 
common/autotest_common.sh@945 -- # kill 70275 00:13:24.133 [2024-07-11 07:06:08.121703] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:24.133 07:06:08 -- common/autotest_common.sh@950 -- # wait 70275 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70997 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:24.700 Process pid: 70997 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70997' 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.700 07:06:08 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70997 00:13:24.700 07:06:08 -- common/autotest_common.sh@819 -- # '[' -z 70997 ']' 00:13:24.700 07:06:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.700 07:06:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:24.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.700 07:06:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.700 07:06:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:24.700 07:06:08 -- common/autotest_common.sh@10 -- # set +x 00:13:24.700 [2024-07-11 07:06:08.606565] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:24.701 [2024-07-11 07:06:08.607535] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:24.701 [2024-07-11 07:06:08.607633] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.701 [2024-07-11 07:06:08.739905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.959 [2024-07-11 07:06:08.814790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.959 [2024-07-11 07:06:08.814943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.959 [2024-07-11 07:06:08.814958] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.959 [2024-07-11 07:06:08.814967] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:24.959 [2024-07-11 07:06:08.815080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.959 [2024-07-11 07:06:08.815542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.959 [2024-07-11 07:06:08.815646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.959 [2024-07-11 07:06:08.815657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.959 [2024-07-11 07:06:08.927370] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:24.959 [2024-07-11 07:06:08.934668] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:24.959 [2024-07-11 07:06:08.934845] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:24.959 [2024-07-11 07:06:08.935664] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:24.960 [2024-07-11 07:06:08.935816] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:13:25.529 07:06:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:25.529 07:06:09 -- common/autotest_common.sh@852 -- # return 0 00:13:25.529 07:06:09 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:26.906 07:06:10 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:26.906 07:06:10 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:26.906 07:06:10 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:26.906 07:06:10 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.906 07:06:10 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:26.906 07:06:10 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:27.165 Malloc1 00:13:27.165 07:06:11 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:27.424 07:06:11 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:27.683 07:06:11 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:27.942 07:06:11 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.942 07:06:11 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:27.942 07:06:11 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:27.942 Malloc2 00:13:27.942 07:06:11 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:28.201 07:06:12 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:28.460 07:06:12 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:28.719 
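The interrupt-mode bring-up logged above (nvmf_tgt started with --interrupt-mode, a VFIOUSER transport created with -M -I, then one malloc-backed subsystem and vfio-user listener per device) follows a small, repeatable pattern. A condensed sketch, with paths, flags and NQNs taken from this run; SPDK_DIR, NUM_DEVICES and the crude socket-wait loop are assumptions for local use:

  #!/usr/bin/env bash
  # Condensed sketch of the interrupt-mode vfio-user target setup above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  NUM_DEVICES=2

  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # Crude stand-in for waitforlisten: poll until the RPC socket answers.
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  "$RPC" nvmf_create_transport -t VFIOUSER -M -I    # transport flags copied verbatim from this run
  mkdir -p /var/run/vfio-user

  for i in $(seq 1 "$NUM_DEVICES"); do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
      "$RPC" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      "$RPC" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      "$RPC" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done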
07:06:12 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:28.719 07:06:12 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70997 00:13:28.719 07:06:12 -- common/autotest_common.sh@926 -- # '[' -z 70997 ']' 00:13:28.719 07:06:12 -- common/autotest_common.sh@930 -- # kill -0 70997 00:13:28.719 07:06:12 -- common/autotest_common.sh@931 -- # uname 00:13:28.719 07:06:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:28.719 07:06:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70997 00:13:28.719 killing process with pid 70997 00:13:28.719 07:06:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:28.719 07:06:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:28.719 07:06:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70997' 00:13:28.719 07:06:12 -- common/autotest_common.sh@945 -- # kill 70997 00:13:28.719 07:06:12 -- common/autotest_common.sh@950 -- # wait 70997 00:13:29.287 07:06:13 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:29.287 07:06:13 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:29.287 00:13:29.287 real 0m54.369s 00:13:29.287 user 3m34.268s 00:13:29.287 sys 0m3.515s 00:13:29.287 07:06:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.287 ************************************ 00:13:29.287 07:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.287 END TEST nvmf_vfio_user 00:13:29.287 ************************************ 00:13:29.287 07:06:13 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:29.287 07:06:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:29.287 07:06:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:29.287 07:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.287 ************************************ 00:13:29.287 START TEST nvmf_vfio_user_nvme_compliance 00:13:29.287 ************************************ 00:13:29.287 07:06:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:29.287 * Looking for test storage... 
00:13:29.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:13:29.287 07:06:13 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:29.288 07:06:13 -- nvmf/common.sh@7 -- # uname -s 00:13:29.288 07:06:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.288 07:06:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.288 07:06:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.288 07:06:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.288 07:06:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.288 07:06:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.288 07:06:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.288 07:06:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.288 07:06:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.288 07:06:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.288 07:06:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:29.288 07:06:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:29.288 07:06:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.288 07:06:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.288 07:06:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:29.288 07:06:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.288 07:06:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.288 07:06:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.288 07:06:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.288 07:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.288 07:06:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.288 07:06:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.288 07:06:13 -- 
paths/export.sh@5 -- # export PATH 00:13:29.288 07:06:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.288 07:06:13 -- nvmf/common.sh@46 -- # : 0 00:13:29.288 07:06:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.288 07:06:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.288 07:06:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.288 07:06:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.288 07:06:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.288 07:06:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.288 07:06:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.288 07:06:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.288 07:06:13 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.288 07:06:13 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.288 07:06:13 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:29.288 07:06:13 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:29.288 07:06:13 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:29.288 07:06:13 -- compliance/compliance.sh@20 -- # nvmfpid=71187 00:13:29.288 Process pid: 71187 00:13:29.288 07:06:13 -- compliance/compliance.sh@21 -- # echo 'Process pid: 71187' 00:13:29.288 07:06:13 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:29.288 07:06:13 -- compliance/compliance.sh@24 -- # waitforlisten 71187 00:13:29.288 07:06:13 -- common/autotest_common.sh@819 -- # '[' -z 71187 ']' 00:13:29.288 07:06:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.288 07:06:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:29.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.288 07:06:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.288 07:06:13 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:29.288 07:06:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:29.288 07:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:29.288 [2024-07-11 07:06:13.271508] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:29.288 [2024-07-11 07:06:13.271598] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.547 [2024-07-11 07:06:13.409961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.547 [2024-07-11 07:06:13.495630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:29.547 [2024-07-11 07:06:13.495784] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.547 [2024-07-11 07:06:13.495799] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.547 [2024-07-11 07:06:13.495808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.547 [2024-07-11 07:06:13.495924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.547 [2024-07-11 07:06:13.496294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.547 [2024-07-11 07:06:13.496327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.479 07:06:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:30.479 07:06:14 -- common/autotest_common.sh@852 -- # return 0 00:13:30.479 07:06:14 -- compliance/compliance.sh@26 -- # sleep 1 00:13:31.411 07:06:15 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:31.411 07:06:15 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:31.411 07:06:15 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:31.411 07:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.411 07:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 07:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.411 07:06:15 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:31.411 07:06:15 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:31.411 07:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.411 07:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 malloc0 00:13:31.411 07:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.411 07:06:15 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:31.411 07:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.411 07:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 07:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.411 07:06:15 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:31.411 07:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.411 07:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 07:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.411 07:06:15 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:31.411 07:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.411 07:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:31.411 07:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:31.411 07:06:15 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 
'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:31.670 00:13:31.670 00:13:31.670 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.670 http://cunit.sourceforge.net/ 00:13:31.670 00:13:31.670 00:13:31.670 Suite: nvme_compliance 00:13:31.670 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-11 07:06:15.562274] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:31.670 [2024-07-11 07:06:15.562324] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:31.670 [2024-07-11 07:06:15.562333] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:31.670 passed 00:13:31.670 Test: admin_identify_ctrlr_verify_fused ...passed 00:13:31.932 Test: admin_identify_ns ...[2024-07-11 07:06:15.790468] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:31.932 [2024-07-11 07:06:15.798465] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:31.932 passed 00:13:31.932 Test: admin_get_features_mandatory_features ...passed 00:13:32.219 Test: admin_get_features_optional_features ...passed 00:13:32.219 Test: admin_set_features_number_of_queues ...passed 00:13:32.492 Test: admin_get_log_page_mandatory_logs ...passed 00:13:32.492 Test: admin_get_log_page_with_lpo ...[2024-07-11 07:06:16.385472] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:32.492 passed 00:13:32.492 Test: fabric_property_get ...passed 00:13:32.751 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-11 07:06:16.554249] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:32.751 passed 00:13:32.751 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-11 07:06:16.717462] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:32.751 [2024-07-11 07:06:16.733472] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:32.751 passed 00:13:33.010 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-11 07:06:16.818032] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:33.010 passed 00:13:33.010 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-11 07:06:16.975463] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:33.010 [2024-07-11 07:06:16.999458] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:33.010 passed 00:13:33.268 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-11 07:06:17.082831] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:33.268 [2024-07-11 07:06:17.082961] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:33.268 passed 00:13:33.268 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-11 07:06:17.255463] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:33.268 [2024-07-11 07:06:17.262491] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:33.268 [2024-07-11 07:06:17.270472] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:33.268 [2024-07-11 07:06:17.278486] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:33.526 passed 
00:13:33.526 Test: admin_create_io_sq_verify_pc ...[2024-07-11 07:06:17.401477] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:33.526 passed 00:13:34.901 Test: admin_create_io_qp_max_qps ...[2024-07-11 07:06:18.623478] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:35.160 passed 00:13:35.418 Test: admin_create_io_sq_shared_cq ...[2024-07-11 07:06:19.220461] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:35.418 passed 00:13:35.418 00:13:35.418 Run Summary: Type Total Ran Passed Failed Inactive 00:13:35.418 suites 1 1 n/a 0 0 00:13:35.418 tests 18 18 18 0 0 00:13:35.418 asserts 360 360 360 0 n/a 00:13:35.418 00:13:35.418 Elapsed time = 1.524 seconds 00:13:35.418 07:06:19 -- compliance/compliance.sh@42 -- # killprocess 71187 00:13:35.418 07:06:19 -- common/autotest_common.sh@926 -- # '[' -z 71187 ']' 00:13:35.418 07:06:19 -- common/autotest_common.sh@930 -- # kill -0 71187 00:13:35.418 07:06:19 -- common/autotest_common.sh@931 -- # uname 00:13:35.418 07:06:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:35.418 07:06:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71187 00:13:35.418 07:06:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:35.418 07:06:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:35.418 killing process with pid 71187 00:13:35.418 07:06:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71187' 00:13:35.418 07:06:19 -- common/autotest_common.sh@945 -- # kill 71187 00:13:35.418 07:06:19 -- common/autotest_common.sh@950 -- # wait 71187 00:13:35.677 07:06:19 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:35.677 07:06:19 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:35.677 00:13:35.677 real 0m6.563s 00:13:35.677 user 0m18.458s 00:13:35.677 sys 0m0.550s 00:13:35.677 07:06:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.677 07:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:35.677 ************************************ 00:13:35.677 END TEST nvmf_vfio_user_nvme_compliance 00:13:35.677 ************************************ 00:13:35.677 07:06:19 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:35.677 07:06:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:35.677 07:06:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:35.677 07:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:35.677 ************************************ 00:13:35.677 START TEST nvmf_vfio_user_fuzz 00:13:35.677 ************************************ 00:13:35.677 07:06:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:35.936 * Looking for test storage... 
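The fuzz stage that begins here drives randomized commands at a single malloc-backed vfio-user subsystem and then tears it down. A condensed sketch of the sequence that appears in the log below, using the same seed, runtime and flags as this run; SPDK_DIR and the variable names are illustrative assumptions, and a running nvmf_tgt on the default RPC socket is assumed:

  #!/usr/bin/env bash
  # Sketch of the vfio-user fuzz pass (commands mirror the log below).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

  # Target side: one malloc namespace behind cnode0.
  "$RPC" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  "$RPC" bdev_malloc_create 64 512 -b malloc0
  "$RPC" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  "$RPC" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

  # Fuzzer side: flags (-t 30 -S 123456 -N -a) copied verbatim from this run.
  "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/vfio_user_fuzz \
      -t 30 -S 123456 -F "$TRID" -N -a

  # Cleanup mirrors the log: drop the subsystem, then stop the target.
  "$RPC" nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0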
00:13:35.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.936 07:06:19 -- nvmf/common.sh@7 -- # uname -s 00:13:35.936 07:06:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.936 07:06:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.936 07:06:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.936 07:06:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.936 07:06:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.936 07:06:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.936 07:06:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.936 07:06:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.936 07:06:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.936 07:06:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.936 07:06:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:35.936 07:06:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:35.936 07:06:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.936 07:06:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.936 07:06:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.936 07:06:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.936 07:06:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.936 07:06:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.936 07:06:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.936 07:06:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.936 07:06:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.936 07:06:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.936 07:06:19 -- 
paths/export.sh@5 -- # export PATH 00:13:35.936 07:06:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.936 07:06:19 -- nvmf/common.sh@46 -- # : 0 00:13:35.936 07:06:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:35.936 07:06:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:35.936 07:06:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:35.936 07:06:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.936 07:06:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.936 07:06:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:35.936 07:06:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:35.936 07:06:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=71333 00:13:35.936 Process pid: 71333 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 71333' 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:35.936 07:06:19 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 71333 00:13:35.936 07:06:19 -- common/autotest_common.sh@819 -- # '[' -z 71333 ']' 00:13:35.936 07:06:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.936 07:06:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:35.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.937 07:06:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:35.937 07:06:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:35.937 07:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:35.937 07:06:19 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:36.870 07:06:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:36.870 07:06:20 -- common/autotest_common.sh@852 -- # return 0 00:13:36.870 07:06:20 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:37.805 07:06:21 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:37.805 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.805 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:37.805 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.805 07:06:21 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:37.805 07:06:21 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:37.805 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.805 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:37.805 malloc0 00:13:37.805 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.805 07:06:21 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:37.805 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.805 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:38.063 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.063 07:06:21 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:38.063 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.063 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:38.063 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.063 07:06:21 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:38.063 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.063 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:38.063 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.063 07:06:21 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:38.063 07:06:21 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:38.322 Shutting down the fuzz application 00:13:38.322 07:06:22 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:38.322 07:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.322 07:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:38.322 07:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.322 07:06:22 -- target/vfio_user_fuzz.sh@46 -- # killprocess 71333 00:13:38.322 07:06:22 -- common/autotest_common.sh@926 -- # '[' -z 71333 ']' 00:13:38.322 07:06:22 -- common/autotest_common.sh@930 -- # kill -0 71333 00:13:38.322 07:06:22 -- common/autotest_common.sh@931 -- # uname 00:13:38.322 07:06:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:38.322 07:06:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71333 00:13:38.322 07:06:22 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:38.322 07:06:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:38.322 killing process with pid 71333 00:13:38.322 07:06:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71333' 00:13:38.322 07:06:22 -- common/autotest_common.sh@945 -- # kill 71333 00:13:38.322 07:06:22 -- common/autotest_common.sh@950 -- # wait 71333 00:13:38.888 07:06:22 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:38.888 07:06:22 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:38.888 00:13:38.888 real 0m2.915s 00:13:38.888 user 0m3.074s 00:13:38.888 sys 0m0.441s 00:13:38.888 07:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.888 07:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:38.888 ************************************ 00:13:38.888 END TEST nvmf_vfio_user_fuzz 00:13:38.888 ************************************ 00:13:38.888 07:06:22 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:38.888 07:06:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:38.888 07:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.888 07:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:38.888 ************************************ 00:13:38.888 START TEST nvmf_host_management 00:13:38.888 ************************************ 00:13:38.888 07:06:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:38.888 * Looking for test storage... 
00:13:38.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:38.888 07:06:22 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.888 07:06:22 -- nvmf/common.sh@7 -- # uname -s 00:13:38.888 07:06:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.888 07:06:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.888 07:06:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.888 07:06:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.888 07:06:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.888 07:06:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.888 07:06:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.888 07:06:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.888 07:06:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.888 07:06:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.888 07:06:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:38.888 07:06:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:38.888 07:06:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.888 07:06:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.888 07:06:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.888 07:06:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.888 07:06:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.888 07:06:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.888 07:06:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.888 07:06:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.888 07:06:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.888 07:06:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.888 07:06:22 -- 
paths/export.sh@5 -- # export PATH 00:13:38.888 07:06:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.888 07:06:22 -- nvmf/common.sh@46 -- # : 0 00:13:38.888 07:06:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:38.888 07:06:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:38.888 07:06:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:38.888 07:06:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.888 07:06:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.888 07:06:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:38.888 07:06:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:38.888 07:06:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:38.888 07:06:22 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.888 07:06:22 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.888 07:06:22 -- target/host_management.sh@104 -- # nvmftestinit 00:13:38.888 07:06:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:38.888 07:06:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.888 07:06:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:38.888 07:06:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:38.888 07:06:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:38.888 07:06:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.888 07:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.888 07:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.888 07:06:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:38.888 07:06:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:38.888 07:06:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:38.889 07:06:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:38.889 07:06:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:38.889 07:06:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:38.889 07:06:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.889 07:06:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.889 07:06:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:38.889 07:06:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:38.889 07:06:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:38.889 07:06:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:38.889 07:06:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:38.889 07:06:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.889 07:06:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:38.889 07:06:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:38.889 07:06:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:38.889 07:06:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:38.889 07:06:22 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:13:38.889 07:06:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:38.889 Cannot find device "nvmf_tgt_br" 00:13:38.889 07:06:22 -- nvmf/common.sh@154 -- # true 00:13:38.889 07:06:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:38.889 Cannot find device "nvmf_tgt_br2" 00:13:38.889 07:06:22 -- nvmf/common.sh@155 -- # true 00:13:38.889 07:06:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:38.889 07:06:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:38.889 Cannot find device "nvmf_tgt_br" 00:13:38.889 07:06:22 -- nvmf/common.sh@157 -- # true 00:13:38.889 07:06:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:38.889 Cannot find device "nvmf_tgt_br2" 00:13:38.889 07:06:22 -- nvmf/common.sh@158 -- # true 00:13:38.889 07:06:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:38.889 07:06:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:38.889 07:06:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:38.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.889 07:06:22 -- nvmf/common.sh@161 -- # true 00:13:38.889 07:06:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:38.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.889 07:06:22 -- nvmf/common.sh@162 -- # true 00:13:38.889 07:06:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:39.147 07:06:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:39.147 07:06:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:39.147 07:06:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:39.147 07:06:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:39.147 07:06:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:39.147 07:06:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:39.147 07:06:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:39.147 07:06:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:39.147 07:06:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:39.147 07:06:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:39.147 07:06:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:39.147 07:06:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:39.147 07:06:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:39.147 07:06:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:39.147 07:06:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:39.147 07:06:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:39.147 07:06:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:39.147 07:06:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:39.147 07:06:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:39.147 07:06:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:39.147 07:06:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:13:39.147 07:06:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:39.147 07:06:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:39.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:39.147 00:13:39.147 --- 10.0.0.2 ping statistics --- 00:13:39.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.147 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:39.147 07:06:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:39.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:39.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:39.147 00:13:39.147 --- 10.0.0.3 ping statistics --- 00:13:39.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.147 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:39.148 07:06:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:39.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:39.148 00:13:39.148 --- 10.0.0.1 ping statistics --- 00:13:39.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.148 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:39.148 07:06:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.148 07:06:23 -- nvmf/common.sh@421 -- # return 0 00:13:39.148 07:06:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:39.148 07:06:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.148 07:06:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:39.148 07:06:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:39.148 07:06:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.148 07:06:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:39.148 07:06:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:39.148 07:06:23 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:39.148 07:06:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:39.148 07:06:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:39.148 07:06:23 -- common/autotest_common.sh@10 -- # set +x 00:13:39.148 ************************************ 00:13:39.148 START TEST nvmf_host_management 00:13:39.148 ************************************ 00:13:39.148 07:06:23 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:39.148 07:06:23 -- target/host_management.sh@69 -- # starttarget 00:13:39.148 07:06:23 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:39.148 07:06:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:39.148 07:06:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:39.148 07:06:23 -- common/autotest_common.sh@10 -- # set +x 00:13:39.148 07:06:23 -- nvmf/common.sh@469 -- # nvmfpid=71571 00:13:39.148 07:06:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:39.148 07:06:23 -- nvmf/common.sh@470 -- # waitforlisten 71571 00:13:39.148 07:06:23 -- common/autotest_common.sh@819 -- # '[' -z 71571 ']' 00:13:39.148 07:06:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.148 07:06:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
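The NET_TYPE=virt plumbing that nvmf_veth_init builds above is easier to read in one piece: the initiator address 10.0.0.1 stays in the root namespace, the two target addresses live in the nvmf_tgt_ns_spdk namespace, and a bridge joins the veth peers, which is why all three pings succeed. A condensed sketch of the same commands (the link-up steps and the pre-cleanup of stale devices are omitted):

ip netns add nvmf_tgt_ns_spdk

# three veth pairs: one initiator-side, two target-side, moved into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bridge the *_br peers together and let NVMe/TCP traffic through
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT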
00:13:39.148 07:06:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.148 07:06:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:39.148 07:06:23 -- common/autotest_common.sh@10 -- # set +x 00:13:39.415 [2024-07-11 07:06:23.211866] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:39.415 [2024-07-11 07:06:23.211949] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.415 [2024-07-11 07:06:23.356576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.415 [2024-07-11 07:06:23.467172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.415 [2024-07-11 07:06:23.467670] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.415 [2024-07-11 07:06:23.467787] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.415 [2024-07-11 07:06:23.467879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.415 [2024-07-11 07:06:23.468159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.415 [2024-07-11 07:06:23.468312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.415 [2024-07-11 07:06:23.469538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:39.415 [2024-07-11 07:06:23.469560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.347 07:06:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:40.347 07:06:24 -- common/autotest_common.sh@852 -- # return 0 00:13:40.347 07:06:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:40.347 07:06:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:40.347 07:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:40.347 07:06:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.347 07:06:24 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.347 07:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.347 07:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:40.347 [2024-07-11 07:06:24.259322] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.347 07:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.347 07:06:24 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:40.347 07:06:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:40.347 07:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:40.347 07:06:24 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:40.347 07:06:24 -- target/host_management.sh@23 -- # cat 00:13:40.347 07:06:24 -- target/host_management.sh@30 -- # rpc_cmd 00:13:40.347 07:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.347 07:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:40.347 Malloc0 00:13:40.347 [2024-07-11 07:06:24.343848] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.347 07:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.347 07:06:24 
-- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:40.347 07:06:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:40.347 07:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:40.347 07:06:24 -- target/host_management.sh@73 -- # perfpid=71643 00:13:40.347 07:06:24 -- target/host_management.sh@74 -- # waitforlisten 71643 /var/tmp/bdevperf.sock 00:13:40.347 07:06:24 -- common/autotest_common.sh@819 -- # '[' -z 71643 ']' 00:13:40.347 07:06:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.347 07:06:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:40.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.347 07:06:24 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:40.347 07:06:24 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:40.347 07:06:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.347 07:06:24 -- nvmf/common.sh@520 -- # config=() 00:13:40.347 07:06:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:40.347 07:06:24 -- nvmf/common.sh@520 -- # local subsystem config 00:13:40.347 07:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:40.347 07:06:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:40.347 07:06:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:40.347 { 00:13:40.347 "params": { 00:13:40.348 "name": "Nvme$subsystem", 00:13:40.348 "trtype": "$TEST_TRANSPORT", 00:13:40.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.348 "adrfam": "ipv4", 00:13:40.348 "trsvcid": "$NVMF_PORT", 00:13:40.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.348 "hdgst": ${hdgst:-false}, 00:13:40.348 "ddgst": ${ddgst:-false} 00:13:40.348 }, 00:13:40.348 "method": "bdev_nvme_attach_controller" 00:13:40.348 } 00:13:40.348 EOF 00:13:40.348 )") 00:13:40.348 07:06:24 -- nvmf/common.sh@542 -- # cat 00:13:40.348 07:06:24 -- nvmf/common.sh@544 -- # jq . 00:13:40.605 07:06:24 -- nvmf/common.sh@545 -- # IFS=, 00:13:40.605 07:06:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:40.605 "params": { 00:13:40.605 "name": "Nvme0", 00:13:40.605 "trtype": "tcp", 00:13:40.605 "traddr": "10.0.0.2", 00:13:40.605 "adrfam": "ipv4", 00:13:40.605 "trsvcid": "4420", 00:13:40.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:40.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:40.605 "hdgst": false, 00:13:40.605 "ddgst": false 00:13:40.605 }, 00:13:40.605 "method": "bdev_nvme_attach_controller" 00:13:40.605 }' 00:13:40.605 [2024-07-11 07:06:24.454695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:40.605 [2024-07-11 07:06:24.454803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71643 ] 00:13:40.605 [2024-07-11 07:06:24.595196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.863 [2024-07-11 07:06:24.705623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.863 Running I/O for 10 seconds... 
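The JSON fragment printed above is what gen_nvmf_target_json hands bdevperf through /dev/fd/63: a single bdev_nvme_attach_controller call that points host0 at the listener on 10.0.0.2:4420. A sketch of an equivalent standalone run, with that fragment wrapped in SPDK's usual --json config envelope and written to a scratch file (the /tmp path and filename here are just examples, not part of the test):

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep, 64 KiB verify workload for 10 seconds, matching the flags in the log
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10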
00:13:41.428 07:06:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:41.428 07:06:25 -- common/autotest_common.sh@852 -- # return 0 00:13:41.428 07:06:25 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:41.428 07:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.428 07:06:25 -- common/autotest_common.sh@10 -- # set +x 00:13:41.428 07:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.428 07:06:25 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:41.428 07:06:25 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:41.428 07:06:25 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:41.428 07:06:25 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:41.428 07:06:25 -- target/host_management.sh@52 -- # local ret=1 00:13:41.428 07:06:25 -- target/host_management.sh@53 -- # local i 00:13:41.428 07:06:25 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:41.428 07:06:25 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:41.428 07:06:25 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:41.428 07:06:25 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:41.428 07:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.428 07:06:25 -- common/autotest_common.sh@10 -- # set +x 00:13:41.688 07:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.688 07:06:25 -- target/host_management.sh@55 -- # read_io_count=2094 00:13:41.688 07:06:25 -- target/host_management.sh@58 -- # '[' 2094 -ge 100 ']' 00:13:41.688 07:06:25 -- target/host_management.sh@59 -- # ret=0 00:13:41.688 07:06:25 -- target/host_management.sh@60 -- # break 00:13:41.688 07:06:25 -- target/host_management.sh@64 -- # return 0 00:13:41.688 07:06:25 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:41.688 07:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.688 07:06:25 -- common/autotest_common.sh@10 -- # set +x 00:13:41.688 [2024-07-11 07:06:25.533751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the 
state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.533993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.534509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d200b0 is same with the state(5) to be set 00:13:41.688 [2024-07-11 07:06:25.536202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.688 [2024-07-11 07:06:25.536429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.688 [2024-07-11 07:06:25.536438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.536989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.536998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:41.689 [2024-07-11 07:06:25.537055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 
07:06:25.537227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.689 [2024-07-11 07:06:25.537491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537578] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19907d0 was disconnected and freed. reset controller. 00:13:41.689 [2024-07-11 07:06:25.537658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.689 [2024-07-11 07:06:25.537673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.689 [2024-07-11 07:06:25.537691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.689 [2024-07-11 07:06:25.537708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.689 [2024-07-11 07:06:25.537724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.689 [2024-07-11 07:06:25.537732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1990170 is same with the state(5) to be set 00:13:41.689 [2024-07-11 07:06:25.538687] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:41.689 task offset: 34176 on job bdev=Nvme0n1 fails 00:13:41.689 00:13:41.689 Latency(us) 00:13:41.689 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:13:41.689 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:41.689 Job: Nvme0n1 ended in about 0.62 seconds with error 00:13:41.689 Verification LBA range: start 0x0 length 0x400 00:13:41.689 Nvme0n1 : 0.62 3695.43 230.96 103.14 0.00 16527.68 1995.87 24546.21 00:13:41.689 =================================================================================================================== 00:13:41.689 Total : 3695.43 230.96 103.14 0.00 16527.68 1995.87 24546.21 00:13:41.689 07:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.689 [2024-07-11 07:06:25.540303] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.689 [2024-07-11 07:06:25.540325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1990170 (9): Bad file descriptor 00:13:41.689 07:06:25 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:41.690 07:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.690 07:06:25 -- common/autotest_common.sh@10 -- # set +x 00:13:41.690 [2024-07-11 07:06:25.544641] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:41.690 [2024-07-11 07:06:25.544756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:41.690 [2024-07-11 07:06:25.544777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.690 [2024-07-11 07:06:25.544791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:41.690 [2024-07-11 07:06:25.544801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:41.690 [2024-07-11 07:06:25.544809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:41.690 [2024-07-11 07:06:25.544816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1990170 00:13:41.690 [2024-07-11 07:06:25.544850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1990170 (9): Bad file descriptor 00:13:41.690 [2024-07-11 07:06:25.544867] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:41.690 [2024-07-11 07:06:25.544875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:41.690 [2024-07-11 07:06:25.544885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:41.690 [2024-07-11 07:06:25.544900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
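The tail of this run is the host-management check itself: while bdevperf is mid-I/O the script revokes host0's authorization, the target aborts its queues, the reconnect is refused, and only after the host is re-added does the follow-up attach work. Reduced to the two RPCs visible in the trace (a sketch; scripts/rpc.py stands in for rpc_cmd):

# revoke host0 -> its queues are torn down ("ABORTED - SQ DELETION" above) and the
# reconnect fails the fabrics CONNECT with sct 1 / sc 132 (0x84, connect invalid host)
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# re-allow host0 -> the fresh one-second bdevperf run that follows attaches cleanly
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0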
00:13:41.690 07:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.690 07:06:25 -- target/host_management.sh@87 -- # sleep 1 00:13:42.623 07:06:26 -- target/host_management.sh@91 -- # kill -9 71643 00:13:42.623 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71643) - No such process 00:13:42.623 07:06:26 -- target/host_management.sh@91 -- # true 00:13:42.623 07:06:26 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:42.623 07:06:26 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:42.623 07:06:26 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:42.623 07:06:26 -- nvmf/common.sh@520 -- # config=() 00:13:42.623 07:06:26 -- nvmf/common.sh@520 -- # local subsystem config 00:13:42.623 07:06:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:42.623 07:06:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:42.623 { 00:13:42.623 "params": { 00:13:42.623 "name": "Nvme$subsystem", 00:13:42.623 "trtype": "$TEST_TRANSPORT", 00:13:42.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.623 "adrfam": "ipv4", 00:13:42.623 "trsvcid": "$NVMF_PORT", 00:13:42.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.623 "hdgst": ${hdgst:-false}, 00:13:42.623 "ddgst": ${ddgst:-false} 00:13:42.623 }, 00:13:42.623 "method": "bdev_nvme_attach_controller" 00:13:42.623 } 00:13:42.623 EOF 00:13:42.623 )") 00:13:42.623 07:06:26 -- nvmf/common.sh@542 -- # cat 00:13:42.623 07:06:26 -- nvmf/common.sh@544 -- # jq . 00:13:42.623 07:06:26 -- nvmf/common.sh@545 -- # IFS=, 00:13:42.623 07:06:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:42.623 "params": { 00:13:42.623 "name": "Nvme0", 00:13:42.623 "trtype": "tcp", 00:13:42.623 "traddr": "10.0.0.2", 00:13:42.623 "adrfam": "ipv4", 00:13:42.623 "trsvcid": "4420", 00:13:42.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.623 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:42.623 "hdgst": false, 00:13:42.623 "ddgst": false 00:13:42.623 }, 00:13:42.623 "method": "bdev_nvme_attach_controller" 00:13:42.623 }' 00:13:42.623 [2024-07-11 07:06:26.608202] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:42.623 [2024-07-11 07:06:26.608283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71693 ] 00:13:42.882 [2024-07-11 07:06:26.739549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.882 [2024-07-11 07:06:26.825895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.141 Running I/O for 1 seconds... 
00:13:44.075 00:13:44.076 Latency(us) 00:13:44.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.076 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:44.076 Verification LBA range: start 0x0 length 0x400 00:13:44.076 Nvme0n1 : 1.01 3816.29 238.52 0.00 0.00 16510.23 1079.85 21686.46 00:13:44.076 =================================================================================================================== 00:13:44.076 Total : 3816.29 238.52 0.00 0.00 16510.23 1079.85 21686.46 00:13:44.334 07:06:28 -- target/host_management.sh@101 -- # stoptarget 00:13:44.334 07:06:28 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:44.334 07:06:28 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:44.334 07:06:28 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:44.334 07:06:28 -- target/host_management.sh@40 -- # nvmftestfini 00:13:44.334 07:06:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:44.334 07:06:28 -- nvmf/common.sh@116 -- # sync 00:13:44.592 07:06:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:44.592 07:06:28 -- nvmf/common.sh@119 -- # set +e 00:13:44.592 07:06:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:44.592 07:06:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:44.592 rmmod nvme_tcp 00:13:44.592 rmmod nvme_fabrics 00:13:44.592 rmmod nvme_keyring 00:13:44.592 07:06:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:44.592 07:06:28 -- nvmf/common.sh@123 -- # set -e 00:13:44.592 07:06:28 -- nvmf/common.sh@124 -- # return 0 00:13:44.592 07:06:28 -- nvmf/common.sh@477 -- # '[' -n 71571 ']' 00:13:44.592 07:06:28 -- nvmf/common.sh@478 -- # killprocess 71571 00:13:44.592 07:06:28 -- common/autotest_common.sh@926 -- # '[' -z 71571 ']' 00:13:44.592 07:06:28 -- common/autotest_common.sh@930 -- # kill -0 71571 00:13:44.592 07:06:28 -- common/autotest_common.sh@931 -- # uname 00:13:44.592 07:06:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.592 07:06:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71571 00:13:44.592 07:06:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:44.592 07:06:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:44.592 killing process with pid 71571 00:13:44.592 07:06:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71571' 00:13:44.592 07:06:28 -- common/autotest_common.sh@945 -- # kill 71571 00:13:44.592 07:06:28 -- common/autotest_common.sh@950 -- # wait 71571 00:13:44.851 [2024-07-11 07:06:28.811190] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:44.851 07:06:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.851 07:06:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.851 07:06:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.851 07:06:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.851 07:06:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.851 07:06:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.851 07:06:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.851 07:06:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.851 07:06:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:44.851 00:13:44.851 real 0m5.727s 00:13:44.851 user 
0m23.749s 00:13:44.851 sys 0m1.478s 00:13:44.851 07:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.851 07:06:28 -- common/autotest_common.sh@10 -- # set +x 00:13:44.851 ************************************ 00:13:44.851 END TEST nvmf_host_management 00:13:44.851 ************************************ 00:13:45.110 07:06:28 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:45.110 00:13:45.110 real 0m6.217s 00:13:45.110 user 0m23.853s 00:13:45.110 sys 0m1.740s 00:13:45.110 07:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.110 ************************************ 00:13:45.110 END TEST nvmf_host_management 00:13:45.110 07:06:28 -- common/autotest_common.sh@10 -- # set +x 00:13:45.110 ************************************ 00:13:45.110 07:06:28 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:45.110 07:06:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:45.110 07:06:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.110 07:06:28 -- common/autotest_common.sh@10 -- # set +x 00:13:45.110 ************************************ 00:13:45.110 START TEST nvmf_lvol 00:13:45.110 ************************************ 00:13:45.110 07:06:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:45.110 * Looking for test storage... 00:13:45.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.110 07:06:29 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.110 07:06:29 -- nvmf/common.sh@7 -- # uname -s 00:13:45.110 07:06:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.110 07:06:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.110 07:06:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.110 07:06:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.110 07:06:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.110 07:06:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.110 07:06:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.110 07:06:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.110 07:06:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.110 07:06:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.110 07:06:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:45.110 07:06:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:13:45.110 07:06:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.110 07:06:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.110 07:06:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.110 07:06:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.110 07:06:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.110 07:06:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.110 07:06:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.110 07:06:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.111 07:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.111 07:06:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.111 07:06:29 -- paths/export.sh@5 -- # export PATH 00:13:45.111 07:06:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.111 07:06:29 -- nvmf/common.sh@46 -- # : 0 00:13:45.111 07:06:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:45.111 07:06:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:45.111 07:06:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:45.111 07:06:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.111 07:06:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.111 07:06:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:45.111 07:06:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:45.111 07:06:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:45.111 07:06:29 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.111 07:06:29 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.111 07:06:29 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:45.111 07:06:29 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:45.111 07:06:29 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.111 07:06:29 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:45.111 07:06:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:45.111 07:06:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
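Before any target is started, nvmf/common.sh fixes the per-run initiator identity: nvme gen-hostnqn yields a uuid-based host NQN, the same uuid is reused as the host ID, and both are intended to be handed to nvme connect through the NVME_HOST array. A sketch of how those variables fit together; note that this particular test drives I/O with spdk_nvme_perf and never calls nvme connect, so the connect line below is illustrative only, and the way the host ID is extracted from the NQN is an assumption:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:4394e380-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: the uuid suffix is what becomes the host ID
# illustrative use of the identity when connecting a kernel initiator to the target
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"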
00:13:45.111 07:06:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:45.111 07:06:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:45.111 07:06:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:45.111 07:06:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.111 07:06:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.111 07:06:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.111 07:06:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:45.111 07:06:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:45.111 07:06:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:45.111 07:06:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:45.111 07:06:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:45.111 07:06:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:45.111 07:06:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.111 07:06:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.111 07:06:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.111 07:06:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:45.111 07:06:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.111 07:06:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.111 07:06:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.111 07:06:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.111 07:06:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.111 07:06:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.111 07:06:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.111 07:06:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.111 07:06:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:45.111 07:06:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:45.111 Cannot find device "nvmf_tgt_br" 00:13:45.111 07:06:29 -- nvmf/common.sh@154 -- # true 00:13:45.111 07:06:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.111 Cannot find device "nvmf_tgt_br2" 00:13:45.111 07:06:29 -- nvmf/common.sh@155 -- # true 00:13:45.111 07:06:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:45.111 07:06:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:45.111 Cannot find device "nvmf_tgt_br" 00:13:45.111 07:06:29 -- nvmf/common.sh@157 -- # true 00:13:45.111 07:06:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:45.111 Cannot find device "nvmf_tgt_br2" 00:13:45.111 07:06:29 -- nvmf/common.sh@158 -- # true 00:13:45.111 07:06:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:45.371 07:06:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:45.371 07:06:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.371 07:06:29 -- nvmf/common.sh@161 -- # true 00:13:45.371 07:06:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.371 07:06:29 -- nvmf/common.sh@162 -- # true 00:13:45.371 07:06:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.371 07:06:29 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:13:45.371 07:06:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.371 07:06:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.371 07:06:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.371 07:06:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.371 07:06:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.371 07:06:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:45.371 07:06:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:45.371 07:06:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:45.371 07:06:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:45.371 07:06:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:45.371 07:06:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:45.371 07:06:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.371 07:06:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:45.371 07:06:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.371 07:06:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:45.371 07:06:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:45.371 07:06:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.371 07:06:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.371 07:06:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:45.371 07:06:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.371 07:06:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.371 07:06:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:45.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:45.371 00:13:45.371 --- 10.0.0.2 ping statistics --- 00:13:45.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.371 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:45.371 07:06:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:45.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:45.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:13:45.371 00:13:45.371 --- 10.0.0.3 ping statistics --- 00:13:45.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.371 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:45.371 07:06:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:45.371 00:13:45.371 --- 10.0.0.1 ping statistics --- 00:13:45.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.371 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:45.371 07:06:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.371 07:06:29 -- nvmf/common.sh@421 -- # return 0 00:13:45.371 07:06:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:45.371 07:06:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.371 07:06:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:45.371 07:06:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:45.371 07:06:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.371 07:06:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:45.371 07:06:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:45.371 07:06:29 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:45.371 07:06:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:45.371 07:06:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:45.371 07:06:29 -- common/autotest_common.sh@10 -- # set +x 00:13:45.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.371 07:06:29 -- nvmf/common.sh@469 -- # nvmfpid=71917 00:13:45.371 07:06:29 -- nvmf/common.sh@470 -- # waitforlisten 71917 00:13:45.371 07:06:29 -- common/autotest_common.sh@819 -- # '[' -z 71917 ']' 00:13:45.371 07:06:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.371 07:06:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:45.371 07:06:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.371 07:06:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.371 07:06:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.371 07:06:29 -- common/autotest_common.sh@10 -- # set +x 00:13:45.630 [2024-07-11 07:06:29.476835] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:45.630 [2024-07-11 07:06:29.476918] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.630 [2024-07-11 07:06:29.617717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.888 [2024-07-11 07:06:29.730364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:45.888 [2024-07-11 07:06:29.730903] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.888 [2024-07-11 07:06:29.731121] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.888 [2024-07-11 07:06:29.731300] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
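At this point the bench for nvmf_lvol is in place: a veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.2/10.0.0.3, the initiator end (10.0.0.1) left in the root namespace, everything bridged through nvmf_br with TCP/4420 opened, and nvmf_tgt launched inside the namespace on cores 0-2 (-m 0x7). A condensed sketch of that setup, using the names and addresses from this run (the second target interface, nvmf_tgt_if2/10.0.0.3, is built the same way and is omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays in the root netns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                                 # bridge the *_br peers together
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# the target then runs inside the namespace, as the log shows next
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7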
00:13:45.888 [2024-07-11 07:06:29.731557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.888 [2024-07-11 07:06:29.731654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.888 [2024-07-11 07:06:29.731660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.454 07:06:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:46.454 07:06:30 -- common/autotest_common.sh@852 -- # return 0 00:13:46.454 07:06:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:46.454 07:06:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:46.454 07:06:30 -- common/autotest_common.sh@10 -- # set +x 00:13:46.454 07:06:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.454 07:06:30 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:46.713 [2024-07-11 07:06:30.730374] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.713 07:06:30 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.280 07:06:31 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:47.280 07:06:31 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.280 07:06:31 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:47.280 07:06:31 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:47.538 07:06:31 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:47.797 07:06:31 -- target/nvmf_lvol.sh@29 -- # lvs=895263f1-24eb-4664-a2fa-b393c34a1521 00:13:47.797 07:06:31 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 895263f1-24eb-4664-a2fa-b393c34a1521 lvol 20 00:13:48.056 07:06:32 -- target/nvmf_lvol.sh@32 -- # lvol=b1adb7a6-8fba-43fe-b597-225ec1bb1b80 00:13:48.056 07:06:32 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:48.315 07:06:32 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b1adb7a6-8fba-43fe-b597-225ec1bb1b80 00:13:48.573 07:06:32 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:48.832 [2024-07-11 07:06:32.667879] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.832 07:06:32 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.090 07:06:32 -- target/nvmf_lvol.sh@42 -- # perf_pid=72059 00:13:49.090 07:06:32 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:49.090 07:06:32 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:50.023 07:06:33 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b1adb7a6-8fba-43fe-b597-225ec1bb1b80 MY_SNAPSHOT 00:13:50.280 07:06:34 -- target/nvmf_lvol.sh@47 -- # snapshot=1b21eb41-b323-453c-87c9-7a228fc5a787 00:13:50.280 07:06:34 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b1adb7a6-8fba-43fe-b597-225ec1bb1b80 30 00:13:50.539 07:06:34 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1b21eb41-b323-453c-87c9-7a228fc5a787 MY_CLONE 00:13:50.798 07:06:34 -- target/nvmf_lvol.sh@49 -- # clone=f9564156-daff-44cc-a11e-7d261b9484b3 00:13:50.798 07:06:34 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f9564156-daff-44cc-a11e-7d261b9484b3 00:13:51.365 07:06:35 -- target/nvmf_lvol.sh@53 -- # wait 72059 00:13:59.483 Initializing NVMe Controllers 00:13:59.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:59.483 Controller IO queue size 128, less than required. 00:13:59.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:59.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:59.483 Initialization complete. Launching workers. 00:13:59.483 ======================================================== 00:13:59.483 Latency(us) 00:13:59.483 Device Information : IOPS MiB/s Average min max 00:13:59.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12643.70 49.39 10124.51 2527.66 59500.47 00:13:59.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12504.40 48.85 10241.39 2332.83 59821.27 00:13:59.483 ======================================================== 00:13:59.483 Total : 25148.10 98.23 10182.63 2332.83 59821.27 00:13:59.483 00:13:59.483 07:06:43 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:59.740 07:06:43 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b1adb7a6-8fba-43fe-b597-225ec1bb1b80 00:13:59.997 07:06:43 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 895263f1-24eb-4664-a2fa-b393c34a1521 00:13:59.997 07:06:44 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:59.997 07:06:44 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:59.997 07:06:44 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:59.997 07:06:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:59.997 07:06:44 -- nvmf/common.sh@116 -- # sync 00:14:00.263 07:06:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:00.263 07:06:44 -- nvmf/common.sh@119 -- # set +e 00:14:00.263 07:06:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:00.263 07:06:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:00.263 rmmod nvme_tcp 00:14:00.263 rmmod nvme_fabrics 00:14:00.263 rmmod nvme_keyring 00:14:00.263 07:06:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:00.263 07:06:44 -- nvmf/common.sh@123 -- # set -e 00:14:00.263 07:06:44 -- nvmf/common.sh@124 -- # return 0 00:14:00.263 07:06:44 -- nvmf/common.sh@477 -- # '[' -n 71917 ']' 00:14:00.263 07:06:44 -- nvmf/common.sh@478 -- # killprocess 71917 00:14:00.263 07:06:44 -- common/autotest_common.sh@926 -- # '[' -z 71917 ']' 00:14:00.263 07:06:44 -- common/autotest_common.sh@930 -- # kill -0 71917 00:14:00.263 07:06:44 -- common/autotest_common.sh@931 -- # uname 00:14:00.263 07:06:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:00.263 07:06:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 71917 00:14:00.263 07:06:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:00.263 07:06:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:00.263 killing process with pid 71917 00:14:00.263 07:06:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71917' 00:14:00.263 07:06:44 -- common/autotest_common.sh@945 -- # kill 71917 00:14:00.264 07:06:44 -- common/autotest_common.sh@950 -- # wait 71917 00:14:00.564 07:06:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:00.564 07:06:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:00.564 07:06:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:00.564 07:06:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.564 07:06:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:00.564 07:06:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.564 07:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.564 07:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.564 07:06:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:00.564 ************************************ 00:14:00.564 END TEST nvmf_lvol 00:14:00.564 ************************************ 00:14:00.564 00:14:00.564 real 0m15.537s 00:14:00.564 user 1m4.361s 00:14:00.564 sys 0m4.250s 00:14:00.564 07:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.564 07:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 07:06:44 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:00.564 07:06:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:00.564 07:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:00.564 07:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 ************************************ 00:14:00.564 START TEST nvmf_lvs_grow 00:14:00.564 ************************************ 00:14:00.564 07:06:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:00.837 * Looking for test storage... 
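Condensed, the nvmf_lvol run that just finished builds and exercises the following target-side stack: two 64 MiB malloc bdevs striped into a RAID-0, an lvstore on the RAID, a 20 MiB lvol exported as a namespace of cnode0 over TCP, and then snapshot, resize, clone and inflate performed live while spdk_nvme_perf writes to it. A sketch of that RPC sequence (UUIDs are captured from the calls rather than hard-coded, and the perf run itself is omitted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # returns the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # snapshot taken while I/O is running
$rpc bdev_lvol_resize "$lvol" 30                                 # grow the live lvol from 20 to 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                  # detach the clone from its snapshot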
00:14:00.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.837 07:06:44 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.837 07:06:44 -- nvmf/common.sh@7 -- # uname -s 00:14:00.837 07:06:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.837 07:06:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.837 07:06:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.837 07:06:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.837 07:06:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.837 07:06:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.837 07:06:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.837 07:06:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.837 07:06:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.837 07:06:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.837 07:06:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:00.837 07:06:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:00.837 07:06:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.837 07:06:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.837 07:06:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.837 07:06:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.837 07:06:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.837 07:06:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.837 07:06:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.837 07:06:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.837 07:06:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.837 07:06:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.837 07:06:44 -- 
paths/export.sh@5 -- # export PATH 00:14:00.837 07:06:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.837 07:06:44 -- nvmf/common.sh@46 -- # : 0 00:14:00.837 07:06:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:00.837 07:06:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:00.837 07:06:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:00.837 07:06:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.837 07:06:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.837 07:06:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:00.837 07:06:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:00.837 07:06:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:00.837 07:06:44 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.837 07:06:44 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.837 07:06:44 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:00.837 07:06:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:00.837 07:06:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.837 07:06:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:00.837 07:06:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:00.837 07:06:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:00.837 07:06:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.837 07:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.837 07:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.837 07:06:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:00.837 07:06:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:00.837 07:06:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:00.837 07:06:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:00.837 07:06:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:00.837 07:06:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:00.837 07:06:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.837 07:06:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.837 07:06:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.837 07:06:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:00.837 07:06:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.837 07:06:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.837 07:06:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.837 07:06:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.837 07:06:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.837 07:06:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.837 07:06:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.837 07:06:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.837 07:06:44 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:00.837 07:06:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:00.837 Cannot find device "nvmf_tgt_br" 00:14:00.837 07:06:44 -- nvmf/common.sh@154 -- # true 00:14:00.837 07:06:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.837 Cannot find device "nvmf_tgt_br2" 00:14:00.837 07:06:44 -- nvmf/common.sh@155 -- # true 00:14:00.837 07:06:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:00.837 07:06:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:00.837 Cannot find device "nvmf_tgt_br" 00:14:00.837 07:06:44 -- nvmf/common.sh@157 -- # true 00:14:00.837 07:06:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:00.837 Cannot find device "nvmf_tgt_br2" 00:14:00.837 07:06:44 -- nvmf/common.sh@158 -- # true 00:14:00.837 07:06:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:00.837 07:06:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:00.837 07:06:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.838 07:06:44 -- nvmf/common.sh@161 -- # true 00:14:00.838 07:06:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.838 07:06:44 -- nvmf/common.sh@162 -- # true 00:14:00.838 07:06:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.838 07:06:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.838 07:06:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.838 07:06:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.838 07:06:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.838 07:06:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.838 07:06:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.838 07:06:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.838 07:06:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.838 07:06:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:00.838 07:06:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:00.838 07:06:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:00.838 07:06:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:00.838 07:06:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.838 07:06:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.838 07:06:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.838 07:06:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:00.838 07:06:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:00.838 07:06:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:01.096 07:06:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:01.096 07:06:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:01.096 07:06:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:01.096 07:06:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:01.096 07:06:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:01.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:01.096 00:14:01.096 --- 10.0.0.2 ping statistics --- 00:14:01.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.096 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:01.096 07:06:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:01.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:01.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:14:01.096 00:14:01.096 --- 10.0.0.3 ping statistics --- 00:14:01.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.096 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:01.096 07:06:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:01.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:14:01.096 00:14:01.096 --- 10.0.0.1 ping statistics --- 00:14:01.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.096 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:01.096 07:06:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.096 07:06:44 -- nvmf/common.sh@421 -- # return 0 00:14:01.096 07:06:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:01.096 07:06:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.096 07:06:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:01.096 07:06:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:01.096 07:06:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.096 07:06:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:01.096 07:06:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:01.096 07:06:44 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:01.096 07:06:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:01.096 07:06:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:01.096 07:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:01.096 07:06:44 -- nvmf/common.sh@469 -- # nvmfpid=72432 00:14:01.096 07:06:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:01.096 07:06:44 -- nvmf/common.sh@470 -- # waitforlisten 72432 00:14:01.096 07:06:44 -- common/autotest_common.sh@819 -- # '[' -z 72432 ']' 00:14:01.096 07:06:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.096 07:06:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:01.096 07:06:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.096 07:06:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:01.096 07:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:01.096 [2024-07-11 07:06:45.037557] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:01.096 [2024-07-11 07:06:45.037656] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.354 [2024-07-11 07:06:45.175586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.354 [2024-07-11 07:06:45.256326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:01.354 [2024-07-11 07:06:45.256479] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.354 [2024-07-11 07:06:45.256492] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.354 [2024-07-11 07:06:45.256500] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.354 [2024-07-11 07:06:45.256532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.920 07:06:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:01.920 07:06:45 -- common/autotest_common.sh@852 -- # return 0 00:14:01.920 07:06:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:01.920 07:06:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:01.920 07:06:45 -- common/autotest_common.sh@10 -- # set +x 00:14:02.178 07:06:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.178 07:06:46 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:02.179 [2024-07-11 07:06:46.184176] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:02.179 07:06:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:02.179 07:06:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.179 07:06:46 -- common/autotest_common.sh@10 -- # set +x 00:14:02.179 ************************************ 00:14:02.179 START TEST lvs_grow_clean 00:14:02.179 ************************************ 00:14:02.179 07:06:46 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:02.179 07:06:46 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.437 07:06:46 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:02.437 07:06:46 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:02.695 07:06:46 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=21c3da0c-ea16-4508-a078-3c89915881ca 00:14:02.695 07:06:46 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:02.695 07:06:46 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:02.953 07:06:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:02.953 07:06:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:02.953 07:06:46 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 21c3da0c-ea16-4508-a078-3c89915881ca lvol 150 00:14:03.211 07:06:47 -- target/nvmf_lvs_grow.sh@33 -- # lvol=eaa7cd3f-36b3-40a3-a91e-c0f84c6722da 00:14:03.211 07:06:47 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:03.211 07:06:47 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:03.469 [2024-07-11 07:06:47.366269] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:03.469 [2024-07-11 07:06:47.366321] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:03.469 true 00:14:03.469 07:06:47 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:03.469 07:06:47 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:03.727 07:06:47 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:03.727 07:06:47 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:03.986 07:06:47 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eaa7cd3f-36b3-40a3-a91e-c0f84c6722da 00:14:03.986 07:06:48 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:04.244 [2024-07-11 07:06:48.250764] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.244 07:06:48 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.502 07:06:48 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72588 00:14:04.502 07:06:48 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:04.502 07:06:48 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.502 07:06:48 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72588 /var/tmp/bdevperf.sock 00:14:04.502 07:06:48 -- common/autotest_common.sh@819 -- # '[' -z 72588 ']' 00:14:04.502 07:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.502 07:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.502 07:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
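This is the heart of the lvs_grow_clean case: a 200 MiB file-backed AIO bdev carries an lvstore with a 4 MiB cluster size (49 usable data clusters), a 150 MiB lvol is created on it and exported through cnode0, then the backing file is grown to 400 MiB and the AIO bdev rescanned (51200 -> 102400 blocks); the cluster count stays at 49 until bdev_lvol_grow_lvstore is issued during the I/O run below. A sketch of that grow path, with the file path, sizes and flags used in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096                        # 4 KiB block size
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)             # 49 data clusters on 200 MiB
$rpc bdev_lvol_create -u "$lvs" lvol 150                         # 150 MiB lvol
truncate -s 400M "$aio"                                          # grow the backing file
$rpc bdev_aio_rescan aio_bdev                                    # AIO bdev now reports 102400 blocks
$rpc bdev_lvol_grow_lvstore -u "$lvs"                            # lvstore picks up the new space
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99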
00:14:04.502 07:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.502 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:14:04.760 [2024-07-11 07:06:48.573788] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:04.760 [2024-07-11 07:06:48.573882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72588 ] 00:14:04.760 [2024-07-11 07:06:48.715023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.018 [2024-07-11 07:06:48.830821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.583 07:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:05.583 07:06:49 -- common/autotest_common.sh@852 -- # return 0 00:14:05.584 07:06:49 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:05.584 Nvme0n1 00:14:05.584 07:06:49 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:05.841 [ 00:14:05.841 { 00:14:05.841 "aliases": [ 00:14:05.841 "eaa7cd3f-36b3-40a3-a91e-c0f84c6722da" 00:14:05.841 ], 00:14:05.841 "assigned_rate_limits": { 00:14:05.841 "r_mbytes_per_sec": 0, 00:14:05.841 "rw_ios_per_sec": 0, 00:14:05.841 "rw_mbytes_per_sec": 0, 00:14:05.841 "w_mbytes_per_sec": 0 00:14:05.841 }, 00:14:05.841 "block_size": 4096, 00:14:05.841 "claimed": false, 00:14:05.841 "driver_specific": { 00:14:05.841 "mp_policy": "active_passive", 00:14:05.841 "nvme": [ 00:14:05.841 { 00:14:05.841 "ctrlr_data": { 00:14:05.841 "ana_reporting": false, 00:14:05.841 "cntlid": 1, 00:14:05.841 "firmware_revision": "24.01.1", 00:14:05.841 "model_number": "SPDK bdev Controller", 00:14:05.841 "multi_ctrlr": true, 00:14:05.841 "oacs": { 00:14:05.841 "firmware": 0, 00:14:05.841 "format": 0, 00:14:05.841 "ns_manage": 0, 00:14:05.841 "security": 0 00:14:05.841 }, 00:14:05.841 "serial_number": "SPDK0", 00:14:05.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.841 "vendor_id": "0x8086" 00:14:05.841 }, 00:14:05.841 "ns_data": { 00:14:05.841 "can_share": true, 00:14:05.841 "id": 1 00:14:05.841 }, 00:14:05.841 "trid": { 00:14:05.841 "adrfam": "IPv4", 00:14:05.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.841 "traddr": "10.0.0.2", 00:14:05.841 "trsvcid": "4420", 00:14:05.841 "trtype": "TCP" 00:14:05.841 }, 00:14:05.841 "vs": { 00:14:05.841 "nvme_version": "1.3" 00:14:05.841 } 00:14:05.841 } 00:14:05.841 ] 00:14:05.841 }, 00:14:05.841 "name": "Nvme0n1", 00:14:05.841 "num_blocks": 38912, 00:14:05.841 "product_name": "NVMe disk", 00:14:05.841 "supported_io_types": { 00:14:05.841 "abort": true, 00:14:05.841 "compare": true, 00:14:05.841 "compare_and_write": true, 00:14:05.841 "flush": true, 00:14:05.841 "nvme_admin": true, 00:14:05.841 "nvme_io": true, 00:14:05.841 "read": true, 00:14:05.841 "reset": true, 00:14:05.841 "unmap": true, 00:14:05.841 "write": true, 00:14:05.841 "write_zeroes": true 00:14:05.841 }, 00:14:05.841 "uuid": "eaa7cd3f-36b3-40a3-a91e-c0f84c6722da", 00:14:05.841 "zoned": false 00:14:05.841 } 00:14:05.841 ] 00:14:05.841 07:06:49 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72641 00:14:05.841 07:06:49 -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:05.841 07:06:49 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:06.099 Running I/O for 10 seconds... 00:14:07.033 Latency(us) 00:14:07.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.033 Nvme0n1 : 1.00 10386.00 40.57 0.00 0.00 0.00 0.00 0.00 00:14:07.033 =================================================================================================================== 00:14:07.033 Total : 10386.00 40.57 0.00 0.00 0.00 0.00 0.00 00:14:07.033 00:14:07.966 07:06:51 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:07.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.966 Nvme0n1 : 2.00 9130.00 35.66 0.00 0.00 0.00 0.00 0.00 00:14:07.966 =================================================================================================================== 00:14:07.966 Total : 9130.00 35.66 0.00 0.00 0.00 0.00 0.00 00:14:07.966 00:14:08.225 true 00:14:08.225 07:06:52 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:08.225 07:06:52 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:08.484 07:06:52 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:08.484 07:06:52 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:08.484 07:06:52 -- target/nvmf_lvs_grow.sh@65 -- # wait 72641 00:14:09.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.051 Nvme0n1 : 3.00 8498.33 33.20 0.00 0.00 0.00 0.00 0.00 00:14:09.051 =================================================================================================================== 00:14:09.051 Total : 8498.33 33.20 0.00 0.00 0.00 0.00 0.00 00:14:09.051 00:14:09.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.985 Nvme0n1 : 4.00 8199.00 32.03 0.00 0.00 0.00 0.00 0.00 00:14:09.985 =================================================================================================================== 00:14:09.985 Total : 8199.00 32.03 0.00 0.00 0.00 0.00 0.00 00:14:09.985 00:14:10.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.920 Nvme0n1 : 5.00 8008.20 31.28 0.00 0.00 0.00 0.00 0.00 00:14:10.920 =================================================================================================================== 00:14:10.920 Total : 8008.20 31.28 0.00 0.00 0.00 0.00 0.00 00:14:10.920 00:14:12.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.296 Nvme0n1 : 6.00 7876.67 30.77 0.00 0.00 0.00 0.00 0.00 00:14:12.296 =================================================================================================================== 00:14:12.296 Total : 7876.67 30.77 0.00 0.00 0.00 0.00 0.00 00:14:12.296 00:14:13.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.233 Nvme0n1 : 7.00 7790.14 30.43 0.00 0.00 0.00 0.00 0.00 00:14:13.233 =================================================================================================================== 00:14:13.233 Total : 7790.14 30.43 0.00 0.00 0.00 0.00 0.00 00:14:13.233 00:14:14.170 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:14:14.170 Nvme0n1 : 8.00 7727.62 30.19 0.00 0.00 0.00 0.00 0.00 00:14:14.170 =================================================================================================================== 00:14:14.170 Total : 7727.62 30.19 0.00 0.00 0.00 0.00 0.00 00:14:14.170 00:14:15.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.106 Nvme0n1 : 9.00 7674.89 29.98 0.00 0.00 0.00 0.00 0.00 00:14:15.106 =================================================================================================================== 00:14:15.106 Total : 7674.89 29.98 0.00 0.00 0.00 0.00 0.00 00:14:15.106 00:14:16.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.043 Nvme0n1 : 10.00 7641.60 29.85 0.00 0.00 0.00 0.00 0.00 00:14:16.043 =================================================================================================================== 00:14:16.043 Total : 7641.60 29.85 0.00 0.00 0.00 0.00 0.00 00:14:16.043 00:14:16.043 00:14:16.043 Latency(us) 00:14:16.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.043 Nvme0n1 : 10.01 7647.02 29.87 0.00 0.00 16729.78 6076.97 40274.85 00:14:16.043 =================================================================================================================== 00:14:16.043 Total : 7647.02 29.87 0.00 0.00 16729.78 6076.97 40274.85 00:14:16.043 0 00:14:16.043 07:06:59 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72588 00:14:16.043 07:06:59 -- common/autotest_common.sh@926 -- # '[' -z 72588 ']' 00:14:16.043 07:06:59 -- common/autotest_common.sh@930 -- # kill -0 72588 00:14:16.043 07:06:59 -- common/autotest_common.sh@931 -- # uname 00:14:16.043 07:06:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:16.043 07:06:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72588 00:14:16.043 07:07:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:16.043 07:07:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:16.043 killing process with pid 72588 00:14:16.043 07:07:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72588' 00:14:16.043 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.043 00:14:16.043 Latency(us) 00:14:16.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.043 =================================================================================================================== 00:14:16.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.043 07:07:00 -- common/autotest_common.sh@945 -- # kill 72588 00:14:16.043 07:07:00 -- common/autotest_common.sh@950 -- # wait 72588 00:14:16.301 07:07:00 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:16.560 07:07:00 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:16.560 07:07:00 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:16.819 07:07:00 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:16.819 07:07:00 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:16.819 07:07:00 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:17.078 [2024-07-11 07:07:01.057059] 
vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:17.078 07:07:01 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:17.078 07:07:01 -- common/autotest_common.sh@640 -- # local es=0 00:14:17.078 07:07:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:17.078 07:07:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.078 07:07:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:17.078 07:07:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.078 07:07:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:17.078 07:07:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.078 07:07:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:17.078 07:07:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.078 07:07:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:17.078 07:07:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:17.337 2024/07/11 07:07:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:21c3da0c-ea16-4508-a078-3c89915881ca], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:17.337 request: 00:14:17.337 { 00:14:17.337 "method": "bdev_lvol_get_lvstores", 00:14:17.337 "params": { 00:14:17.337 "uuid": "21c3da0c-ea16-4508-a078-3c89915881ca" 00:14:17.337 } 00:14:17.337 } 00:14:17.337 Got JSON-RPC error response 00:14:17.337 GoRPCClient: error on JSON-RPC call 00:14:17.337 07:07:01 -- common/autotest_common.sh@643 -- # es=1 00:14:17.337 07:07:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:17.337 07:07:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:17.337 07:07:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:17.337 07:07:01 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.596 aio_bdev 00:14:17.596 07:07:01 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev eaa7cd3f-36b3-40a3-a91e-c0f84c6722da 00:14:17.596 07:07:01 -- common/autotest_common.sh@887 -- # local bdev_name=eaa7cd3f-36b3-40a3-a91e-c0f84c6722da 00:14:17.596 07:07:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:17.596 07:07:01 -- common/autotest_common.sh@889 -- # local i 00:14:17.596 07:07:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:17.596 07:07:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:17.596 07:07:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:17.854 07:07:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eaa7cd3f-36b3-40a3-a91e-c0f84c6722da -t 2000 00:14:17.854 [ 00:14:17.854 { 00:14:17.854 "aliases": [ 00:14:17.854 "lvs/lvol" 00:14:17.854 ], 00:14:17.854 "assigned_rate_limits": { 00:14:17.854 "r_mbytes_per_sec": 0, 00:14:17.854 "rw_ios_per_sec": 0, 
00:14:17.854 "rw_mbytes_per_sec": 0, 00:14:17.854 "w_mbytes_per_sec": 0 00:14:17.854 }, 00:14:17.854 "block_size": 4096, 00:14:17.854 "claimed": false, 00:14:17.854 "driver_specific": { 00:14:17.854 "lvol": { 00:14:17.854 "base_bdev": "aio_bdev", 00:14:17.854 "clone": false, 00:14:17.854 "esnap_clone": false, 00:14:17.854 "lvol_store_uuid": "21c3da0c-ea16-4508-a078-3c89915881ca", 00:14:17.854 "snapshot": false, 00:14:17.854 "thin_provision": false 00:14:17.854 } 00:14:17.854 }, 00:14:17.854 "name": "eaa7cd3f-36b3-40a3-a91e-c0f84c6722da", 00:14:17.854 "num_blocks": 38912, 00:14:17.854 "product_name": "Logical Volume", 00:14:17.854 "supported_io_types": { 00:14:17.854 "abort": false, 00:14:17.854 "compare": false, 00:14:17.854 "compare_and_write": false, 00:14:17.854 "flush": false, 00:14:17.854 "nvme_admin": false, 00:14:17.854 "nvme_io": false, 00:14:17.854 "read": true, 00:14:17.854 "reset": true, 00:14:17.854 "unmap": true, 00:14:17.854 "write": true, 00:14:17.854 "write_zeroes": true 00:14:17.854 }, 00:14:17.854 "uuid": "eaa7cd3f-36b3-40a3-a91e-c0f84c6722da", 00:14:17.854 "zoned": false 00:14:17.854 } 00:14:17.854 ] 00:14:18.113 07:07:01 -- common/autotest_common.sh@895 -- # return 0 00:14:18.113 07:07:01 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:18.113 07:07:01 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:18.113 07:07:02 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:18.113 07:07:02 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:18.113 07:07:02 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:18.378 07:07:02 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:18.378 07:07:02 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete eaa7cd3f-36b3-40a3-a91e-c0f84c6722da 00:14:18.637 07:07:02 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21c3da0c-ea16-4508-a078-3c89915881ca 00:14:18.895 07:07:02 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:19.154 07:07:03 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:19.412 ************************************ 00:14:19.412 END TEST lvs_grow_clean 00:14:19.412 ************************************ 00:14:19.412 00:14:19.412 real 0m17.234s 00:14:19.412 user 0m16.473s 00:14:19.412 sys 0m2.057s 00:14:19.412 07:07:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.412 07:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:19.669 07:07:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:19.669 07:07:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.669 07:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:19.669 ************************************ 00:14:19.669 START TEST lvs_grow_dirty 00:14:19.669 ************************************ 00:14:19.669 07:07:03 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:19.669 07:07:03 -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:19.669 07:07:03 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:19.926 07:07:03 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:19.926 07:07:03 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:20.184 07:07:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:20.184 07:07:04 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:20.184 07:07:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:20.184 07:07:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:20.184 07:07:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:20.184 07:07:04 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b lvol 150 00:14:20.442 07:07:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=622e3b31-cc45-4ed1-90dd-d490b37deb9b 00:14:20.442 07:07:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:20.442 07:07:04 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:20.700 [2024-07-11 07:07:04.666428] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:20.700 [2024-07-11 07:07:04.666506] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:20.700 true 00:14:20.700 07:07:04 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:20.700 07:07:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:20.959 07:07:04 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:20.959 07:07:04 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.217 07:07:05 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 622e3b31-cc45-4ed1-90dd-d490b37deb9b 00:14:21.475 07:07:05 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:21.475 07:07:05 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.733 07:07:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73011 00:14:21.734 07:07:05 -- 
target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:21.734 07:07:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.734 07:07:05 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73011 /var/tmp/bdevperf.sock 00:14:21.734 07:07:05 -- common/autotest_common.sh@819 -- # '[' -z 73011 ']' 00:14:21.734 07:07:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.734 07:07:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.734 07:07:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:21.734 07:07:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.734 07:07:05 -- common/autotest_common.sh@10 -- # set +x 00:14:21.992 [2024-07-11 07:07:05.796666] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:21.992 [2024-07-11 07:07:05.796754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73011 ] 00:14:21.992 [2024-07-11 07:07:05.932413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.992 [2024-07-11 07:07:06.018730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.995 07:07:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.995 07:07:06 -- common/autotest_common.sh@852 -- # return 0 00:14:22.995 07:07:06 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:22.995 Nvme0n1 00:14:22.995 07:07:06 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:23.276 [ 00:14:23.276 { 00:14:23.276 "aliases": [ 00:14:23.276 "622e3b31-cc45-4ed1-90dd-d490b37deb9b" 00:14:23.276 ], 00:14:23.276 "assigned_rate_limits": { 00:14:23.276 "r_mbytes_per_sec": 0, 00:14:23.276 "rw_ios_per_sec": 0, 00:14:23.276 "rw_mbytes_per_sec": 0, 00:14:23.276 "w_mbytes_per_sec": 0 00:14:23.276 }, 00:14:23.276 "block_size": 4096, 00:14:23.276 "claimed": false, 00:14:23.276 "driver_specific": { 00:14:23.276 "mp_policy": "active_passive", 00:14:23.276 "nvme": [ 00:14:23.276 { 00:14:23.276 "ctrlr_data": { 00:14:23.276 "ana_reporting": false, 00:14:23.276 "cntlid": 1, 00:14:23.276 "firmware_revision": "24.01.1", 00:14:23.276 "model_number": "SPDK bdev Controller", 00:14:23.276 "multi_ctrlr": true, 00:14:23.276 "oacs": { 00:14:23.276 "firmware": 0, 00:14:23.276 "format": 0, 00:14:23.276 "ns_manage": 0, 00:14:23.276 "security": 0 00:14:23.276 }, 00:14:23.276 "serial_number": "SPDK0", 00:14:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:23.276 "vendor_id": "0x8086" 00:14:23.276 }, 00:14:23.276 "ns_data": { 00:14:23.276 "can_share": true, 00:14:23.276 "id": 1 00:14:23.276 }, 00:14:23.276 "trid": { 00:14:23.277 "adrfam": "IPv4", 00:14:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:23.277 "traddr": "10.0.0.2", 00:14:23.277 "trsvcid": "4420", 00:14:23.277 "trtype": "TCP" 00:14:23.277 }, 
00:14:23.277 "vs": { 00:14:23.277 "nvme_version": "1.3" 00:14:23.277 } 00:14:23.277 } 00:14:23.277 ] 00:14:23.277 }, 00:14:23.277 "name": "Nvme0n1", 00:14:23.277 "num_blocks": 38912, 00:14:23.277 "product_name": "NVMe disk", 00:14:23.277 "supported_io_types": { 00:14:23.277 "abort": true, 00:14:23.277 "compare": true, 00:14:23.277 "compare_and_write": true, 00:14:23.277 "flush": true, 00:14:23.277 "nvme_admin": true, 00:14:23.277 "nvme_io": true, 00:14:23.277 "read": true, 00:14:23.277 "reset": true, 00:14:23.277 "unmap": true, 00:14:23.277 "write": true, 00:14:23.277 "write_zeroes": true 00:14:23.277 }, 00:14:23.277 "uuid": "622e3b31-cc45-4ed1-90dd-d490b37deb9b", 00:14:23.277 "zoned": false 00:14:23.277 } 00:14:23.277 ] 00:14:23.277 07:07:07 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73063 00:14:23.277 07:07:07 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:23.277 07:07:07 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:23.277 Running I/O for 10 seconds... 00:14:24.210 Latency(us) 00:14:24.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.211 Nvme0n1 : 1.00 9417.00 36.79 0.00 0.00 0.00 0.00 0.00 00:14:24.211 =================================================================================================================== 00:14:24.211 Total : 9417.00 36.79 0.00 0.00 0.00 0.00 0.00 00:14:24.211 00:14:25.144 07:07:09 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:25.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.402 Nvme0n1 : 2.00 9467.50 36.98 0.00 0.00 0.00 0.00 0.00 00:14:25.402 =================================================================================================================== 00:14:25.402 Total : 9467.50 36.98 0.00 0.00 0.00 0.00 0.00 00:14:25.402 00:14:25.402 true 00:14:25.660 07:07:09 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:25.660 07:07:09 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:25.918 07:07:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:25.918 07:07:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:25.918 07:07:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 73063 00:14:26.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.484 Nvme0n1 : 3.00 9389.67 36.68 0.00 0.00 0.00 0.00 0.00 00:14:26.484 =================================================================================================================== 00:14:26.484 Total : 9389.67 36.68 0.00 0.00 0.00 0.00 0.00 00:14:26.484 00:14:27.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.417 Nvme0n1 : 4.00 9354.50 36.54 0.00 0.00 0.00 0.00 0.00 00:14:27.417 =================================================================================================================== 00:14:27.417 Total : 9354.50 36.54 0.00 0.00 0.00 0.00 0.00 00:14:27.417 00:14:28.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.349 Nvme0n1 : 5.00 9345.00 36.50 0.00 0.00 0.00 0.00 0.00 00:14:28.349 
=================================================================================================================== 00:14:28.349 Total : 9345.00 36.50 0.00 0.00 0.00 0.00 0.00 00:14:28.349 00:14:29.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.282 Nvme0n1 : 6.00 9327.17 36.43 0.00 0.00 0.00 0.00 0.00 00:14:29.282 =================================================================================================================== 00:14:29.282 Total : 9327.17 36.43 0.00 0.00 0.00 0.00 0.00 00:14:29.282 00:14:30.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.215 Nvme0n1 : 7.00 9242.71 36.10 0.00 0.00 0.00 0.00 0.00 00:14:30.215 =================================================================================================================== 00:14:30.215 Total : 9242.71 36.10 0.00 0.00 0.00 0.00 0.00 00:14:30.215 00:14:31.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.591 Nvme0n1 : 8.00 8987.75 35.11 0.00 0.00 0.00 0.00 0.00 00:14:31.591 =================================================================================================================== 00:14:31.591 Total : 8987.75 35.11 0.00 0.00 0.00 0.00 0.00 00:14:31.591 00:14:32.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.527 Nvme0n1 : 9.00 8676.00 33.89 0.00 0.00 0.00 0.00 0.00 00:14:32.527 =================================================================================================================== 00:14:32.527 Total : 8676.00 33.89 0.00 0.00 0.00 0.00 0.00 00:14:32.527 00:14:33.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.463 Nvme0n1 : 10.00 8507.10 33.23 0.00 0.00 0.00 0.00 0.00 00:14:33.463 =================================================================================================================== 00:14:33.463 Total : 8507.10 33.23 0.00 0.00 0.00 0.00 0.00 00:14:33.463 00:14:33.463 00:14:33.463 Latency(us) 00:14:33.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.463 Nvme0n1 : 10.01 8507.90 33.23 0.00 0.00 15033.80 6196.13 159192.90 00:14:33.463 =================================================================================================================== 00:14:33.463 Total : 8507.90 33.23 0.00 0.00 15033.80 6196.13 159192.90 00:14:33.463 0 00:14:33.463 07:07:17 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73011 00:14:33.463 07:07:17 -- common/autotest_common.sh@926 -- # '[' -z 73011 ']' 00:14:33.463 07:07:17 -- common/autotest_common.sh@930 -- # kill -0 73011 00:14:33.463 07:07:17 -- common/autotest_common.sh@931 -- # uname 00:14:33.463 07:07:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:33.463 07:07:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73011 00:14:33.463 killing process with pid 73011 00:14:33.463 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.463 00:14:33.463 Latency(us) 00:14:33.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.463 =================================================================================================================== 00:14:33.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.463 07:07:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:33.463 07:07:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
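The ten-second tables above come from bdevperf rather than from the target itself: bdevperf is started with -z so it idles until told to run, the exported namespace is attached as a local bdev over the bdevperf RPC socket, and perform_tests then drives the randwrite workload (-S 1 makes it report results each second). A minimal sketch of that flow, assuming it is run from the SPDK repo root and reusing the socket, core mask, address and NQN seen in this run:

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the NVMe/TCP namespace as local bdev Nvme0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # -z made bdevperf wait for this call before starting I/O
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests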
00:14:33.463 07:07:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73011' 00:14:33.463 07:07:17 -- common/autotest_common.sh@945 -- # kill 73011 00:14:33.463 07:07:17 -- common/autotest_common.sh@950 -- # wait 73011 00:14:33.722 07:07:17 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:33.980 07:07:17 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:33.981 07:07:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:34.239 07:07:18 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:34.239 07:07:18 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:34.239 07:07:18 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72432 00:14:34.239 07:07:18 -- target/nvmf_lvs_grow.sh@74 -- # wait 72432 00:14:34.239 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72432 Killed "${NVMF_APP[@]}" "$@" 00:14:34.239 07:07:18 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:34.239 07:07:18 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:34.239 07:07:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:34.239 07:07:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:34.239 07:07:18 -- common/autotest_common.sh@10 -- # set +x 00:14:34.239 07:07:18 -- nvmf/common.sh@469 -- # nvmfpid=73209 00:14:34.239 07:07:18 -- nvmf/common.sh@470 -- # waitforlisten 73209 00:14:34.239 07:07:18 -- common/autotest_common.sh@819 -- # '[' -z 73209 ']' 00:14:34.239 07:07:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.239 07:07:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:34.239 07:07:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:34.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.239 07:07:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.239 07:07:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:34.239 07:07:18 -- common/autotest_common.sh@10 -- # set +x 00:14:34.239 [2024-07-11 07:07:18.207670] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:34.239 [2024-07-11 07:07:18.207764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.498 [2024-07-11 07:07:18.347150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.498 [2024-07-11 07:07:18.430928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:34.498 [2024-07-11 07:07:18.431046] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.498 [2024-07-11 07:07:18.431058] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.498 [2024-07-11 07:07:18.431066] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:34.498 [2024-07-11 07:07:18.431092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.433 07:07:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:35.433 07:07:19 -- common/autotest_common.sh@852 -- # return 0 00:14:35.433 07:07:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:35.433 07:07:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:35.433 07:07:19 -- common/autotest_common.sh@10 -- # set +x 00:14:35.433 07:07:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.433 07:07:19 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.433 [2024-07-11 07:07:19.473442] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:35.433 [2024-07-11 07:07:19.473777] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:35.433 [2024-07-11 07:07:19.473968] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:35.691 07:07:19 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:35.691 07:07:19 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 622e3b31-cc45-4ed1-90dd-d490b37deb9b 00:14:35.691 07:07:19 -- common/autotest_common.sh@887 -- # local bdev_name=622e3b31-cc45-4ed1-90dd-d490b37deb9b 00:14:35.691 07:07:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:35.691 07:07:19 -- common/autotest_common.sh@889 -- # local i 00:14:35.691 07:07:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:35.691 07:07:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:35.691 07:07:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:35.691 07:07:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 622e3b31-cc45-4ed1-90dd-d490b37deb9b -t 2000 00:14:35.949 [ 00:14:35.949 { 00:14:35.949 "aliases": [ 00:14:35.949 "lvs/lvol" 00:14:35.949 ], 00:14:35.949 "assigned_rate_limits": { 00:14:35.949 "r_mbytes_per_sec": 0, 00:14:35.949 "rw_ios_per_sec": 0, 00:14:35.949 "rw_mbytes_per_sec": 0, 00:14:35.949 "w_mbytes_per_sec": 0 00:14:35.949 }, 00:14:35.949 "block_size": 4096, 00:14:35.949 "claimed": false, 00:14:35.949 "driver_specific": { 00:14:35.949 "lvol": { 00:14:35.950 "base_bdev": "aio_bdev", 00:14:35.950 "clone": false, 00:14:35.950 "esnap_clone": false, 00:14:35.950 "lvol_store_uuid": "b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b", 00:14:35.950 "snapshot": false, 00:14:35.950 "thin_provision": false 00:14:35.950 } 00:14:35.950 }, 00:14:35.950 "name": "622e3b31-cc45-4ed1-90dd-d490b37deb9b", 00:14:35.950 "num_blocks": 38912, 00:14:35.950 "product_name": "Logical Volume", 00:14:35.950 "supported_io_types": { 00:14:35.950 "abort": false, 00:14:35.950 "compare": false, 00:14:35.950 "compare_and_write": false, 00:14:35.950 "flush": false, 00:14:35.950 "nvme_admin": false, 00:14:35.950 "nvme_io": false, 00:14:35.950 "read": true, 00:14:35.950 "reset": true, 00:14:35.950 "unmap": true, 00:14:35.950 "write": true, 00:14:35.950 "write_zeroes": true 00:14:35.950 }, 00:14:35.950 "uuid": "622e3b31-cc45-4ed1-90dd-d490b37deb9b", 00:14:35.950 "zoned": false 00:14:35.950 } 00:14:35.950 ] 00:14:35.950 07:07:19 -- common/autotest_common.sh@895 -- # return 0 00:14:35.950 07:07:19 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:35.950 07:07:19 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:36.208 07:07:20 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:36.208 07:07:20 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:36.208 07:07:20 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:36.466 07:07:20 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:36.466 07:07:20 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:36.725 [2024-07-11 07:07:20.579426] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:36.725 07:07:20 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:36.725 07:07:20 -- common/autotest_common.sh@640 -- # local es=0 00:14:36.725 07:07:20 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:36.725 07:07:20 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.725 07:07:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.725 07:07:20 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.725 07:07:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.725 07:07:20 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.725 07:07:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.725 07:07:20 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.725 07:07:20 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:36.725 07:07:20 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:36.984 2024/07/11 07:07:20 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:36.984 request: 00:14:36.984 { 00:14:36.984 "method": "bdev_lvol_get_lvstores", 00:14:36.984 "params": { 00:14:36.984 "uuid": "b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b" 00:14:36.984 } 00:14:36.984 } 00:14:36.984 Got JSON-RPC error response 00:14:36.984 GoRPCClient: error on JSON-RPC call 00:14:36.984 07:07:20 -- common/autotest_common.sh@643 -- # es=1 00:14:36.984 07:07:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:36.984 07:07:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:36.984 07:07:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:36.984 07:07:20 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:36.984 aio_bdev 00:14:36.984 07:07:21 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 622e3b31-cc45-4ed1-90dd-d490b37deb9b 00:14:36.984 07:07:21 -- common/autotest_common.sh@887 -- # local bdev_name=622e3b31-cc45-4ed1-90dd-d490b37deb9b 00:14:36.984 07:07:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:36.984 
07:07:21 -- common/autotest_common.sh@889 -- # local i 00:14:36.984 07:07:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:36.984 07:07:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:36.984 07:07:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:37.243 07:07:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 622e3b31-cc45-4ed1-90dd-d490b37deb9b -t 2000 00:14:37.502 [ 00:14:37.502 { 00:14:37.502 "aliases": [ 00:14:37.502 "lvs/lvol" 00:14:37.502 ], 00:14:37.502 "assigned_rate_limits": { 00:14:37.502 "r_mbytes_per_sec": 0, 00:14:37.502 "rw_ios_per_sec": 0, 00:14:37.502 "rw_mbytes_per_sec": 0, 00:14:37.502 "w_mbytes_per_sec": 0 00:14:37.502 }, 00:14:37.502 "block_size": 4096, 00:14:37.502 "claimed": false, 00:14:37.502 "driver_specific": { 00:14:37.502 "lvol": { 00:14:37.502 "base_bdev": "aio_bdev", 00:14:37.502 "clone": false, 00:14:37.502 "esnap_clone": false, 00:14:37.502 "lvol_store_uuid": "b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b", 00:14:37.502 "snapshot": false, 00:14:37.502 "thin_provision": false 00:14:37.502 } 00:14:37.502 }, 00:14:37.502 "name": "622e3b31-cc45-4ed1-90dd-d490b37deb9b", 00:14:37.502 "num_blocks": 38912, 00:14:37.502 "product_name": "Logical Volume", 00:14:37.502 "supported_io_types": { 00:14:37.502 "abort": false, 00:14:37.502 "compare": false, 00:14:37.502 "compare_and_write": false, 00:14:37.502 "flush": false, 00:14:37.502 "nvme_admin": false, 00:14:37.502 "nvme_io": false, 00:14:37.502 "read": true, 00:14:37.502 "reset": true, 00:14:37.502 "unmap": true, 00:14:37.502 "write": true, 00:14:37.502 "write_zeroes": true 00:14:37.502 }, 00:14:37.502 "uuid": "622e3b31-cc45-4ed1-90dd-d490b37deb9b", 00:14:37.502 "zoned": false 00:14:37.502 } 00:14:37.502 ] 00:14:37.502 07:07:21 -- common/autotest_common.sh@895 -- # return 0 00:14:37.502 07:07:21 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:37.502 07:07:21 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:37.760 07:07:21 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:37.760 07:07:21 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:37.760 07:07:21 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:38.019 07:07:21 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:38.019 07:07:21 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 622e3b31-cc45-4ed1-90dd-d490b37deb9b 00:14:38.019 07:07:22 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5137b5a-8d9c-4d3f-b22b-5e6128a6d41b 00:14:38.278 07:07:22 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:38.537 07:07:22 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:39.104 00:14:39.104 real 0m19.397s 00:14:39.104 user 0m38.267s 00:14:39.104 sys 0m9.332s 00:14:39.104 07:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.104 07:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:39.104 ************************************ 00:14:39.104 END TEST lvs_grow_dirty 00:14:39.104 ************************************ 00:14:39.104 07:07:22 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:39.104 07:07:22 -- common/autotest_common.sh@796 -- # type=--id 00:14:39.104 07:07:22 -- common/autotest_common.sh@797 -- # id=0 00:14:39.104 07:07:22 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:39.104 07:07:22 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:39.104 07:07:22 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:39.104 07:07:22 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:39.104 07:07:22 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:39.104 07:07:22 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:39.104 nvmf_trace.0 00:14:39.104 07:07:22 -- common/autotest_common.sh@811 -- # return 0 00:14:39.104 07:07:22 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:39.104 07:07:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:39.104 07:07:22 -- nvmf/common.sh@116 -- # sync 00:14:39.104 07:07:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:39.104 07:07:23 -- nvmf/common.sh@119 -- # set +e 00:14:39.104 07:07:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:39.104 07:07:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:39.104 rmmod nvme_tcp 00:14:39.104 rmmod nvme_fabrics 00:14:39.104 rmmod nvme_keyring 00:14:39.362 07:07:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:39.362 07:07:23 -- nvmf/common.sh@123 -- # set -e 00:14:39.363 07:07:23 -- nvmf/common.sh@124 -- # return 0 00:14:39.363 07:07:23 -- nvmf/common.sh@477 -- # '[' -n 73209 ']' 00:14:39.363 07:07:23 -- nvmf/common.sh@478 -- # killprocess 73209 00:14:39.363 07:07:23 -- common/autotest_common.sh@926 -- # '[' -z 73209 ']' 00:14:39.363 07:07:23 -- common/autotest_common.sh@930 -- # kill -0 73209 00:14:39.363 07:07:23 -- common/autotest_common.sh@931 -- # uname 00:14:39.363 07:07:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:39.363 07:07:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73209 00:14:39.363 07:07:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:39.363 07:07:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:39.363 07:07:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73209' 00:14:39.363 killing process with pid 73209 00:14:39.363 07:07:23 -- common/autotest_common.sh@945 -- # kill 73209 00:14:39.363 07:07:23 -- common/autotest_common.sh@950 -- # wait 73209 00:14:39.363 07:07:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:39.363 07:07:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:39.363 07:07:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:39.363 07:07:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.363 07:07:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:39.363 07:07:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.363 07:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.363 07:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.622 07:07:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:39.622 00:14:39.622 real 0m38.905s 00:14:39.622 user 1m0.456s 00:14:39.622 sys 0m12.091s 00:14:39.622 07:07:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.622 07:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:39.622 
************************************ 00:14:39.622 END TEST nvmf_lvs_grow 00:14:39.622 ************************************ 00:14:39.622 07:07:23 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:39.622 07:07:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:39.622 07:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:39.622 07:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:39.622 ************************************ 00:14:39.622 START TEST nvmf_bdev_io_wait 00:14:39.622 ************************************ 00:14:39.622 07:07:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:39.622 * Looking for test storage... 00:14:39.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:39.622 07:07:23 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.622 07:07:23 -- nvmf/common.sh@7 -- # uname -s 00:14:39.622 07:07:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.622 07:07:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.622 07:07:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.622 07:07:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.622 07:07:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.622 07:07:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.622 07:07:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.622 07:07:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.622 07:07:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.622 07:07:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.622 07:07:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:39.622 07:07:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:39.622 07:07:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.622 07:07:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.622 07:07:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:39.622 07:07:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.622 07:07:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.622 07:07:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.622 07:07:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.622 07:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.622 07:07:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.622 07:07:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.622 07:07:23 -- paths/export.sh@5 -- # export PATH 00:14:39.622 07:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.622 07:07:23 -- nvmf/common.sh@46 -- # : 0 00:14:39.622 07:07:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:39.622 07:07:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:39.622 07:07:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:39.622 07:07:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.622 07:07:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.622 07:07:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:39.622 07:07:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:39.622 07:07:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:39.622 07:07:23 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.622 07:07:23 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.622 07:07:23 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:39.622 07:07:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:39.622 07:07:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.622 07:07:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:39.622 07:07:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:39.622 07:07:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:39.622 07:07:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.622 07:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.622 07:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.622 07:07:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:39.622 07:07:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:39.622 07:07:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:39.622 07:07:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:39.622 07:07:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
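Because NET_TYPE=virt, nvmftestinit needs no physical NICs: nvmf_veth_init (the entries that follow) builds a veth/bridge topology with the target side of each pair moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3) and the initiator side left in the root namespace (10.0.0.1), then checks reachability with pings. A condensed sketch of that setup, showing only one of the two target interfaces; bringing the links up and adding the iptables ACCEPT rules happen as in the entries below:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.2                                                      # reachability check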
00:14:39.622 07:07:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:39.622 07:07:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.622 07:07:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.622 07:07:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:39.622 07:07:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:39.622 07:07:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:39.622 07:07:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:39.622 07:07:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:39.622 07:07:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.622 07:07:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:39.622 07:07:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:39.622 07:07:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:39.622 07:07:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:39.622 07:07:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:39.622 07:07:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:39.622 Cannot find device "nvmf_tgt_br" 00:14:39.622 07:07:23 -- nvmf/common.sh@154 -- # true 00:14:39.622 07:07:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:39.880 Cannot find device "nvmf_tgt_br2" 00:14:39.880 07:07:23 -- nvmf/common.sh@155 -- # true 00:14:39.880 07:07:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:39.880 07:07:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:39.880 Cannot find device "nvmf_tgt_br" 00:14:39.880 07:07:23 -- nvmf/common.sh@157 -- # true 00:14:39.880 07:07:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:39.880 Cannot find device "nvmf_tgt_br2" 00:14:39.880 07:07:23 -- nvmf/common.sh@158 -- # true 00:14:39.880 07:07:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:39.880 07:07:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:39.880 07:07:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.880 07:07:23 -- nvmf/common.sh@161 -- # true 00:14:39.880 07:07:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:39.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.880 07:07:23 -- nvmf/common.sh@162 -- # true 00:14:39.880 07:07:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.880 07:07:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.880 07:07:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.880 07:07:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.880 07:07:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:39.880 07:07:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.880 07:07:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.880 07:07:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:39.880 07:07:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:39.880 
07:07:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:39.880 07:07:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:39.880 07:07:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:39.880 07:07:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:39.880 07:07:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.880 07:07:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:39.880 07:07:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:39.880 07:07:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:39.880 07:07:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:39.880 07:07:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:40.161 07:07:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:40.161 07:07:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:40.161 07:07:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:40.161 07:07:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:40.161 07:07:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:40.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:40.161 00:14:40.161 --- 10.0.0.2 ping statistics --- 00:14:40.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.161 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:40.161 07:07:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:40.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:40.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:14:40.161 00:14:40.161 --- 10.0.0.3 ping statistics --- 00:14:40.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.161 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:40.161 07:07:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:40.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:40.161 00:14:40.161 --- 10.0.0.1 ping statistics --- 00:14:40.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.161 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:40.161 07:07:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.161 07:07:23 -- nvmf/common.sh@421 -- # return 0 00:14:40.161 07:07:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:40.161 07:07:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.161 07:07:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:40.161 07:07:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:40.161 07:07:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.161 07:07:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:40.161 07:07:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:40.161 07:07:24 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:40.161 07:07:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:40.161 07:07:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:40.161 07:07:24 -- common/autotest_common.sh@10 -- # set +x 00:14:40.161 07:07:24 -- nvmf/common.sh@469 -- # nvmfpid=73617 00:14:40.161 07:07:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:40.161 07:07:24 -- nvmf/common.sh@470 -- # waitforlisten 73617 00:14:40.161 07:07:24 -- common/autotest_common.sh@819 -- # '[' -z 73617 ']' 00:14:40.161 07:07:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.161 07:07:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:40.161 07:07:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.161 07:07:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.161 07:07:24 -- common/autotest_common.sh@10 -- # set +x 00:14:40.161 [2024-07-11 07:07:24.086003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:40.161 [2024-07-11 07:07:24.086089] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.420 [2024-07-11 07:07:24.225171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.420 [2024-07-11 07:07:24.320345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:40.420 [2024-07-11 07:07:24.320664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.420 [2024-07-11 07:07:24.320681] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.420 [2024-07-11 07:07:24.320690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
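The target for the bdev_io_wait test is started with --wait-for-rpc on four cores (-m 0xF), so nothing is configured until the RPCs in the entries that follow arrive. The bdev_io pool is shrunk first, presumably so that bdev_io allocation fails under load and the io_wait (queue-and-retry) path gets exercised, and only then is a malloc namespace exported over TCP. A condensed sketch of that configuration, written as plain rpc.py calls against the default /var/tmp/spdk.sock socket (the test issues them through its rpc_cmd wrapper):

  scripts/rpc.py bdev_set_options -p 5 -c 1     # tiny bdev_io pool (-p) and per-thread cache (-c)
  scripts/rpc.py framework_start_init           # finish startup deferred by --wait-for-rpc
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420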
00:14:40.420 [2024-07-11 07:07:24.320794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.420 [2024-07-11 07:07:24.321003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.420 [2024-07-11 07:07:24.321311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.420 [2024-07-11 07:07:24.321316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.987 07:07:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:40.987 07:07:25 -- common/autotest_common.sh@852 -- # return 0 00:14:40.987 07:07:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:40.987 07:07:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:40.987 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.246 07:07:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.246 07:07:25 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:41.247 07:07:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.247 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.247 07:07:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:41.247 07:07:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.247 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.247 07:07:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.247 07:07:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.247 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.247 [2024-07-11 07:07:25.179328] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.247 07:07:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.247 07:07:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.247 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.247 Malloc0 00:14:41.247 07:07:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:41.247 07:07:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.247 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.247 07:07:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.247 07:07:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.247 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.247 07:07:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.247 07:07:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.247 07:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.247 [2024-07-11 07:07:25.236699] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.247 07:07:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73672 00:14:41.247 07:07:25 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # config=() 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # local subsystem config 00:14:41.247 07:07:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:41.247 { 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme$subsystem", 00:14:41.247 "trtype": "$TEST_TRANSPORT", 00:14:41.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "$NVMF_PORT", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.247 "hdgst": ${hdgst:-false}, 00:14:41.247 "ddgst": ${ddgst:-false} 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 } 00:14:41.247 EOF 00:14:41.247 )") 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@30 -- # READ_PID=73675 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # config=() 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73679 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # local subsystem config 00:14:41.247 07:07:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:41.247 { 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme$subsystem", 00:14:41.247 "trtype": "$TEST_TRANSPORT", 00:14:41.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "$NVMF_PORT", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.247 "hdgst": ${hdgst:-false}, 00:14:41.247 "ddgst": ${ddgst:-false} 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 } 00:14:41.247 EOF 00:14:41.247 )") 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73680 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@35 -- # sync 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # cat 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # cat 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:41.247 07:07:25 -- nvmf/common.sh@544 -- # jq . 
00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # config=() 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # local subsystem config 00:14:41.247 07:07:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:41.247 { 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme$subsystem", 00:14:41.247 "trtype": "$TEST_TRANSPORT", 00:14:41.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "$NVMF_PORT", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.247 "hdgst": ${hdgst:-false}, 00:14:41.247 "ddgst": ${ddgst:-false} 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 } 00:14:41.247 EOF 00:14:41.247 )") 00:14:41.247 07:07:25 -- nvmf/common.sh@545 -- # IFS=, 00:14:41.247 07:07:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme1", 00:14:41.247 "trtype": "tcp", 00:14:41.247 "traddr": "10.0.0.2", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "4420", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.247 "hdgst": false, 00:14:41.247 "ddgst": false 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 }' 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # config=() 00:14:41.247 07:07:25 -- nvmf/common.sh@520 -- # local subsystem config 00:14:41.247 07:07:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:41.247 { 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme$subsystem", 00:14:41.247 "trtype": "$TEST_TRANSPORT", 00:14:41.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "$NVMF_PORT", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.247 "hdgst": ${hdgst:-false}, 00:14:41.247 "ddgst": ${ddgst:-false} 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 } 00:14:41.247 EOF 00:14:41.247 )") 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # cat 00:14:41.247 07:07:25 -- nvmf/common.sh@542 -- # cat 00:14:41.247 07:07:25 -- nvmf/common.sh@544 -- # jq . 00:14:41.247 07:07:25 -- nvmf/common.sh@544 -- # jq . 00:14:41.247 07:07:25 -- nvmf/common.sh@545 -- # IFS=, 00:14:41.247 07:07:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme1", 00:14:41.247 "trtype": "tcp", 00:14:41.247 "traddr": "10.0.0.2", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "4420", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.247 "hdgst": false, 00:14:41.247 "ddgst": false 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 }' 00:14:41.247 07:07:25 -- nvmf/common.sh@544 -- # jq . 
00:14:41.247 07:07:25 -- nvmf/common.sh@545 -- # IFS=, 00:14:41.247 07:07:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme1", 00:14:41.247 "trtype": "tcp", 00:14:41.247 "traddr": "10.0.0.2", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "4420", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.247 "hdgst": false, 00:14:41.247 "ddgst": false 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 }' 00:14:41.247 07:07:25 -- nvmf/common.sh@545 -- # IFS=, 00:14:41.247 07:07:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:41.247 "params": { 00:14:41.247 "name": "Nvme1", 00:14:41.247 "trtype": "tcp", 00:14:41.247 "traddr": "10.0.0.2", 00:14:41.247 "adrfam": "ipv4", 00:14:41.247 "trsvcid": "4420", 00:14:41.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.247 "hdgst": false, 00:14:41.247 "ddgst": false 00:14:41.247 }, 00:14:41.247 "method": "bdev_nvme_attach_controller" 00:14:41.247 }' 00:14:41.247 07:07:25 -- target/bdev_io_wait.sh@37 -- # wait 73672 00:14:41.247 [2024-07-11 07:07:25.301596] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:41.247 [2024-07-11 07:07:25.301675] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:41.506 [2024-07-11 07:07:25.317917] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:41.507 [2024-07-11 07:07:25.318180] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:41.507 [2024-07-11 07:07:25.324659] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:41.507 [2024-07-11 07:07:25.324728] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:41.507 [2024-07-11 07:07:25.339164] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:41.507 [2024-07-11 07:07:25.339260] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:41.507 [2024-07-11 07:07:25.522000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.775 [2024-07-11 07:07:25.590334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.775 [2024-07-11 07:07:25.646367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:41.775 [2024-07-11 07:07:25.665023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.775 [2024-07-11 07:07:25.692867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:41.775 [2024-07-11 07:07:25.739238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.775 [2024-07-11 07:07:25.767945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:41.775 Running I/O for 1 seconds... 
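Each of the four bdevperf instances above (write, read, flush, unmap) receives its bdev configuration as generated JSON on /dev/fd/63 rather than from a file on disk. A sketch of one such invocation, with the attach parameters taken from the resolved JSON printed in the trace; the outer "subsystems"/"bdev" wrapper is an assumption here, since only the inner method/params object is shown verbatim above:

    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )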
00:14:41.775 [2024-07-11 07:07:25.827035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:42.035 Running I/O for 1 seconds... 00:14:42.035 Running I/O for 1 seconds... 00:14:42.035 Running I/O for 1 seconds... 00:14:42.999 00:14:42.999 Latency(us) 00:14:42.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.999 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:42.999 Nvme1n1 : 1.01 7238.16 28.27 0.00 0.00 17600.46 8757.99 28597.53 00:14:42.999 =================================================================================================================== 00:14:42.999 Total : 7238.16 28.27 0.00 0.00 17600.46 8757.99 28597.53 00:14:42.999 00:14:42.999 Latency(us) 00:14:42.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.999 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:42.999 Nvme1n1 : 1.01 6175.25 24.12 0.00 0.00 20582.05 6047.19 26452.71 00:14:42.999 =================================================================================================================== 00:14:42.999 Total : 6175.25 24.12 0.00 0.00 20582.05 6047.19 26452.71 00:14:42.999 00:14:42.999 Latency(us) 00:14:42.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.999 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:42.999 Nvme1n1 : 1.01 7051.68 27.55 0.00 0.00 18088.45 5957.82 30980.65 00:14:42.999 =================================================================================================================== 00:14:42.999 Total : 7051.68 27.55 0.00 0.00 18088.45 5957.82 30980.65 00:14:42.999 00:14:42.999 Latency(us) 00:14:42.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.999 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:42.999 Nvme1n1 : 1.00 227319.94 887.97 0.00 0.00 560.93 221.56 1117.09 00:14:42.999 =================================================================================================================== 00:14:42.999 Total : 227319.94 887.97 0.00 0.00 560.93 221.56 1117.09 00:14:43.258 07:07:27 -- target/bdev_io_wait.sh@38 -- # wait 73675 00:14:43.258 07:07:27 -- target/bdev_io_wait.sh@39 -- # wait 73679 00:14:43.516 07:07:27 -- target/bdev_io_wait.sh@40 -- # wait 73680 00:14:43.516 07:07:27 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.516 07:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.516 07:07:27 -- common/autotest_common.sh@10 -- # set +x 00:14:43.516 07:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.516 07:07:27 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:43.516 07:07:27 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:43.516 07:07:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:43.516 07:07:27 -- nvmf/common.sh@116 -- # sync 00:14:43.516 07:07:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:43.516 07:07:27 -- nvmf/common.sh@119 -- # set +e 00:14:43.516 07:07:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:43.516 07:07:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:43.516 rmmod nvme_tcp 00:14:43.516 rmmod nvme_fabrics 00:14:43.516 rmmod nvme_keyring 00:14:43.516 07:07:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:43.516 07:07:27 -- nvmf/common.sh@123 -- # set -e 00:14:43.516 07:07:27 -- nvmf/common.sh@124 -- # return 0 00:14:43.516 07:07:27 -- 
nvmf/common.sh@477 -- # '[' -n 73617 ']' 00:14:43.516 07:07:27 -- nvmf/common.sh@478 -- # killprocess 73617 00:14:43.516 07:07:27 -- common/autotest_common.sh@926 -- # '[' -z 73617 ']' 00:14:43.516 07:07:27 -- common/autotest_common.sh@930 -- # kill -0 73617 00:14:43.516 07:07:27 -- common/autotest_common.sh@931 -- # uname 00:14:43.516 07:07:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:43.516 07:07:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73617 00:14:43.516 07:07:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:43.516 07:07:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:43.516 killing process with pid 73617 00:14:43.516 07:07:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73617' 00:14:43.516 07:07:27 -- common/autotest_common.sh@945 -- # kill 73617 00:14:43.516 07:07:27 -- common/autotest_common.sh@950 -- # wait 73617 00:14:43.774 07:07:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:43.774 07:07:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:43.774 07:07:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:43.774 07:07:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.774 07:07:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:43.774 07:07:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.774 07:07:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.774 07:07:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.774 07:07:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:43.774 ************************************ 00:14:43.774 END TEST nvmf_bdev_io_wait 00:14:43.774 ************************************ 00:14:43.775 00:14:43.775 real 0m4.250s 00:14:43.775 user 0m18.711s 00:14:43.775 sys 0m2.026s 00:14:43.775 07:07:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.775 07:07:27 -- common/autotest_common.sh@10 -- # set +x 00:14:43.775 07:07:27 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:43.775 07:07:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:43.775 07:07:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:43.775 07:07:27 -- common/autotest_common.sh@10 -- # set +x 00:14:44.033 ************************************ 00:14:44.033 START TEST nvmf_queue_depth 00:14:44.033 ************************************ 00:14:44.033 07:07:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:44.033 * Looking for test storage... 
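The teardown above first confirms that pid 73617 is still alive and is a reactor process rather than the wrapping sudo before killing it and reaping the exit status. A simplified sketch of that pattern, following the steps visible in the trace (the real killprocess helper in autotest_common.sh handles additional corner cases):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0      # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1              # never kill the wrapping sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap; a non-zero status is expected here
    }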
00:14:44.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:44.033 07:07:27 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.033 07:07:27 -- nvmf/common.sh@7 -- # uname -s 00:14:44.033 07:07:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.033 07:07:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.033 07:07:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.033 07:07:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.033 07:07:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.033 07:07:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.033 07:07:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.033 07:07:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.033 07:07:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.033 07:07:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.033 07:07:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:44.033 07:07:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:44.033 07:07:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.033 07:07:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.033 07:07:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:44.033 07:07:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:44.034 07:07:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.034 07:07:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.034 07:07:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.034 07:07:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.034 07:07:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.034 07:07:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.034 07:07:27 -- 
paths/export.sh@5 -- # export PATH 00:14:44.034 07:07:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.034 07:07:27 -- nvmf/common.sh@46 -- # : 0 00:14:44.034 07:07:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:44.034 07:07:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:44.034 07:07:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:44.034 07:07:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.034 07:07:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.034 07:07:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:44.034 07:07:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:44.034 07:07:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:44.034 07:07:27 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:44.034 07:07:27 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:44.034 07:07:27 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:44.034 07:07:27 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:44.034 07:07:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:44.034 07:07:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.034 07:07:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:44.034 07:07:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:44.034 07:07:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:44.034 07:07:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.034 07:07:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.034 07:07:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.034 07:07:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:44.034 07:07:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:44.034 07:07:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:44.034 07:07:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:44.034 07:07:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:44.034 07:07:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:44.034 07:07:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.034 07:07:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.034 07:07:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:44.034 07:07:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:44.034 07:07:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:44.034 07:07:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:44.034 07:07:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:44.034 07:07:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.034 07:07:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:44.034 07:07:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:44.034 07:07:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:44.034 07:07:27 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:44.034 07:07:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:44.034 07:07:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:44.034 Cannot find device "nvmf_tgt_br" 00:14:44.034 07:07:27 -- nvmf/common.sh@154 -- # true 00:14:44.034 07:07:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.034 Cannot find device "nvmf_tgt_br2" 00:14:44.034 07:07:27 -- nvmf/common.sh@155 -- # true 00:14:44.034 07:07:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:44.034 07:07:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:44.034 Cannot find device "nvmf_tgt_br" 00:14:44.034 07:07:27 -- nvmf/common.sh@157 -- # true 00:14:44.034 07:07:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:44.034 Cannot find device "nvmf_tgt_br2" 00:14:44.034 07:07:28 -- nvmf/common.sh@158 -- # true 00:14:44.034 07:07:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:44.034 07:07:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:44.034 07:07:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.034 07:07:28 -- nvmf/common.sh@161 -- # true 00:14:44.034 07:07:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.034 07:07:28 -- nvmf/common.sh@162 -- # true 00:14:44.034 07:07:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:44.034 07:07:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:44.034 07:07:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:44.034 07:07:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:44.293 07:07:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:44.293 07:07:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:44.293 07:07:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:44.293 07:07:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:44.293 07:07:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:44.293 07:07:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:44.293 07:07:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:44.293 07:07:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:44.293 07:07:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:44.293 07:07:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:44.293 07:07:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:44.293 07:07:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:44.293 07:07:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:44.293 07:07:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:44.293 07:07:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:44.293 07:07:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:44.293 07:07:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.293 
07:07:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.293 07:07:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.293 07:07:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:44.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:14:44.293 00:14:44.293 --- 10.0.0.2 ping statistics --- 00:14:44.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.293 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:44.293 07:07:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:44.293 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:44.293 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:44.293 00:14:44.293 --- 10.0.0.3 ping statistics --- 00:14:44.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.293 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:44.293 07:07:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:44.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:44.293 00:14:44.293 --- 10.0.0.1 ping statistics --- 00:14:44.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.293 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:44.293 07:07:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.293 07:07:28 -- nvmf/common.sh@421 -- # return 0 00:14:44.293 07:07:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:44.293 07:07:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.293 07:07:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:44.293 07:07:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:44.293 07:07:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.293 07:07:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:44.293 07:07:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:44.293 07:07:28 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:44.293 07:07:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:44.293 07:07:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:44.293 07:07:28 -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 07:07:28 -- nvmf/common.sh@469 -- # nvmfpid=73913 00:14:44.293 07:07:28 -- nvmf/common.sh@470 -- # waitforlisten 73913 00:14:44.294 07:07:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:44.294 07:07:28 -- common/autotest_common.sh@819 -- # '[' -z 73913 ']' 00:14:44.294 07:07:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.294 07:07:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:44.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.294 07:07:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.294 07:07:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:44.294 07:07:28 -- common/autotest_common.sh@10 -- # set +x 00:14:44.294 [2024-07-11 07:07:28.317937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
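The commands above (nvmf/common.sh@165-206) rebuild the veth/bridge topology every nvmf target test relies on: the initiator side stays in the root namespace on 10.0.0.1, the target interfaces live in nvmf_tgt_ns_spdk on 10.0.0.2 and 10.0.0.3, and the two sides are joined by the nvmf_br bridge. A condensed sketch of that setup with the names and addresses taken from the log (the full nvmf_veth_init helper also configures the second target interface symmetrically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target    <-> bridge
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # root ns -> target ns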
00:14:44.294 [2024-07-11 07:07:28.318016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.552 [2024-07-11 07:07:28.451074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.552 [2024-07-11 07:07:28.532691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:44.552 [2024-07-11 07:07:28.532838] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.552 [2024-07-11 07:07:28.532851] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.552 [2024-07-11 07:07:28.532859] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.552 [2024-07-11 07:07:28.532891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.119 07:07:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:45.119 07:07:29 -- common/autotest_common.sh@852 -- # return 0 00:14:45.119 07:07:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:45.119 07:07:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:45.119 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 07:07:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.382 07:07:29 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.382 07:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.382 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 [2024-07-11 07:07:29.215744] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.382 07:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.382 07:07:29 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:45.382 07:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.382 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 Malloc0 00:14:45.382 07:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.382 07:07:29 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:45.382 07:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.382 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 07:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.382 07:07:29 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:45.382 07:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.382 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 07:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.382 07:07:29 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.382 07:07:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.382 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 [2024-07-11 07:07:29.279834] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.382 07:07:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.382 07:07:29 -- target/queue_depth.sh@30 -- # bdevperf_pid=73963 00:14:45.382 07:07:29 
-- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.382 07:07:29 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:45.382 07:07:29 -- target/queue_depth.sh@33 -- # waitforlisten 73963 /var/tmp/bdevperf.sock 00:14:45.382 07:07:29 -- common/autotest_common.sh@819 -- # '[' -z 73963 ']' 00:14:45.382 07:07:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.382 07:07:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.382 07:07:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.382 07:07:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.382 07:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.382 [2024-07-11 07:07:29.334117] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:45.382 [2024-07-11 07:07:29.334194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73963 ] 00:14:45.664 [2024-07-11 07:07:29.469977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.664 [2024-07-11 07:07:29.584328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.252 07:07:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.252 07:07:30 -- common/autotest_common.sh@852 -- # return 0 00:14:46.252 07:07:30 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:46.252 07:07:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.252 07:07:30 -- common/autotest_common.sh@10 -- # set +x 00:14:46.510 NVMe0n1 00:14:46.510 07:07:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.510 07:07:30 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.510 Running I/O for 10 seconds... 
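For the queue-depth run, bdevperf is started with -z (wait for RPC) on its own socket, the remote subsystem is attached over that socket, and bdevperf.py then triggers the preconfigured verify workload. A minimal sketch of the flow shown above, with the paths and NQN as they appear in the log:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # attach the target namespace once the bdevperf RPC socket is listening
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # run the 10-second verify workload against NVMe0n1 and wait for it to finish
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    kill "$bdevperf_pid"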
00:14:56.482 00:14:56.482 Latency(us) 00:14:56.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.482 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:56.482 Verification LBA range: start 0x0 length 0x4000 00:14:56.482 NVMe0n1 : 10.05 16951.09 66.22 0.00 0.00 60223.03 11021.96 71017.19 00:14:56.482 =================================================================================================================== 00:14:56.482 Total : 16951.09 66.22 0.00 0.00 60223.03 11021.96 71017.19 00:14:56.482 0 00:14:56.482 07:07:40 -- target/queue_depth.sh@39 -- # killprocess 73963 00:14:56.482 07:07:40 -- common/autotest_common.sh@926 -- # '[' -z 73963 ']' 00:14:56.482 07:07:40 -- common/autotest_common.sh@930 -- # kill -0 73963 00:14:56.482 07:07:40 -- common/autotest_common.sh@931 -- # uname 00:14:56.482 07:07:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.482 07:07:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73963 00:14:56.741 07:07:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:56.741 07:07:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:56.741 killing process with pid 73963 00:14:56.741 07:07:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73963' 00:14:56.741 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.741 00:14:56.741 Latency(us) 00:14:56.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.741 =================================================================================================================== 00:14:56.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.741 07:07:40 -- common/autotest_common.sh@945 -- # kill 73963 00:14:56.741 07:07:40 -- common/autotest_common.sh@950 -- # wait 73963 00:14:57.000 07:07:40 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:57.000 07:07:40 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:57.000 07:07:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:57.000 07:07:40 -- nvmf/common.sh@116 -- # sync 00:14:57.000 07:07:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:57.000 07:07:40 -- nvmf/common.sh@119 -- # set +e 00:14:57.000 07:07:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:57.000 07:07:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:57.000 rmmod nvme_tcp 00:14:57.000 rmmod nvme_fabrics 00:14:57.000 rmmod nvme_keyring 00:14:57.000 07:07:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:57.000 07:07:40 -- nvmf/common.sh@123 -- # set -e 00:14:57.000 07:07:40 -- nvmf/common.sh@124 -- # return 0 00:14:57.000 07:07:40 -- nvmf/common.sh@477 -- # '[' -n 73913 ']' 00:14:57.000 07:07:40 -- nvmf/common.sh@478 -- # killprocess 73913 00:14:57.000 07:07:40 -- common/autotest_common.sh@926 -- # '[' -z 73913 ']' 00:14:57.000 07:07:40 -- common/autotest_common.sh@930 -- # kill -0 73913 00:14:57.000 07:07:40 -- common/autotest_common.sh@931 -- # uname 00:14:57.000 07:07:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.000 07:07:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73913 00:14:57.000 07:07:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:57.000 07:07:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:57.000 killing process with pid 73913 00:14:57.000 07:07:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73913' 00:14:57.000 07:07:40 -- 
common/autotest_common.sh@945 -- # kill 73913 00:14:57.000 07:07:40 -- common/autotest_common.sh@950 -- # wait 73913 00:14:57.259 07:07:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:57.259 07:07:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:57.259 07:07:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:57.259 07:07:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.259 07:07:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:57.259 07:07:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.259 07:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.259 07:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.259 07:07:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:57.259 00:14:57.259 real 0m13.431s 00:14:57.259 user 0m22.313s 00:14:57.259 sys 0m2.610s 00:14:57.259 07:07:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.259 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:14:57.259 ************************************ 00:14:57.259 END TEST nvmf_queue_depth 00:14:57.259 ************************************ 00:14:57.259 07:07:41 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:57.259 07:07:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:57.259 07:07:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:57.259 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:14:57.259 ************************************ 00:14:57.259 START TEST nvmf_multipath 00:14:57.259 ************************************ 00:14:57.259 07:07:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:57.519 * Looking for test storage... 
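Between tests the harness tears the stack down in a fixed order: host-side kernel modules first, then the target application, then the namespace and the leftover initiator address. A simplified sketch of that sequence as it appears above (remove_spdk_ns is assumed here to delete the nvmf_tgt_ns_spdk namespace; the real nvmftestfini wraps the module unloads in the {1..20} retry loop shown in the trace):

    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    kill "$nvmfpid" && wait "$nvmfpid"           # stop nvmf_tgt (killprocess in the log)
    ip netns delete nvmf_tgt_ns_spdk             # assumed body of remove_spdk_ns
    ip -4 addr flush nvmf_init_if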
00:14:57.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:57.519 07:07:41 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.519 07:07:41 -- nvmf/common.sh@7 -- # uname -s 00:14:57.519 07:07:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.519 07:07:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.519 07:07:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.519 07:07:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.519 07:07:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.519 07:07:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.519 07:07:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.519 07:07:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.519 07:07:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.519 07:07:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.519 07:07:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:57.519 07:07:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:14:57.519 07:07:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.519 07:07:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.519 07:07:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.519 07:07:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.519 07:07:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.519 07:07:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.519 07:07:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.519 07:07:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.519 07:07:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.519 07:07:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.519 07:07:41 -- 
paths/export.sh@5 -- # export PATH 00:14:57.519 07:07:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.519 07:07:41 -- nvmf/common.sh@46 -- # : 0 00:14:57.519 07:07:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:57.519 07:07:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:57.519 07:07:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:57.519 07:07:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.519 07:07:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.519 07:07:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:57.519 07:07:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:57.519 07:07:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:57.519 07:07:41 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.519 07:07:41 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.519 07:07:41 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:57.519 07:07:41 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.519 07:07:41 -- target/multipath.sh@43 -- # nvmftestinit 00:14:57.519 07:07:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:57.519 07:07:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.519 07:07:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:57.519 07:07:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:57.519 07:07:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:57.519 07:07:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.519 07:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.519 07:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.519 07:07:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:57.519 07:07:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:57.519 07:07:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:57.519 07:07:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:57.519 07:07:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:57.519 07:07:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:57.519 07:07:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.519 07:07:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.519 07:07:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:57.519 07:07:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:57.519 07:07:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.519 07:07:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.519 07:07:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.520 07:07:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.520 07:07:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.520 07:07:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.520 07:07:41 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.520 07:07:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.520 07:07:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:57.520 07:07:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:57.520 Cannot find device "nvmf_tgt_br" 00:14:57.520 07:07:41 -- nvmf/common.sh@154 -- # true 00:14:57.520 07:07:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.520 Cannot find device "nvmf_tgt_br2" 00:14:57.520 07:07:41 -- nvmf/common.sh@155 -- # true 00:14:57.520 07:07:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:57.520 07:07:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:57.520 Cannot find device "nvmf_tgt_br" 00:14:57.520 07:07:41 -- nvmf/common.sh@157 -- # true 00:14:57.520 07:07:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:57.520 Cannot find device "nvmf_tgt_br2" 00:14:57.520 07:07:41 -- nvmf/common.sh@158 -- # true 00:14:57.520 07:07:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:57.520 07:07:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:57.520 07:07:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.520 07:07:41 -- nvmf/common.sh@161 -- # true 00:14:57.520 07:07:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.520 07:07:41 -- nvmf/common.sh@162 -- # true 00:14:57.520 07:07:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.520 07:07:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.520 07:07:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.780 07:07:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.780 07:07:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.780 07:07:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.780 07:07:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.780 07:07:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.780 07:07:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.780 07:07:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:57.780 07:07:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:57.780 07:07:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:57.780 07:07:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:57.780 07:07:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.780 07:07:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.780 07:07:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.780 07:07:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:57.780 07:07:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:57.780 07:07:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.780 07:07:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.780 07:07:41 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.780 07:07:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.780 07:07:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.780 07:07:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:57.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:14:57.780 00:14:57.780 --- 10.0.0.2 ping statistics --- 00:14:57.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.780 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:14:57.780 07:07:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:57.780 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.780 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:57.780 00:14:57.780 --- 10.0.0.3 ping statistics --- 00:14:57.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.780 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:57.780 07:07:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:14:57.780 00:14:57.780 --- 10.0.0.1 ping statistics --- 00:14:57.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.780 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:57.780 07:07:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.780 07:07:41 -- nvmf/common.sh@421 -- # return 0 00:14:57.780 07:07:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.780 07:07:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.780 07:07:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:57.780 07:07:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:57.780 07:07:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.780 07:07:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:57.780 07:07:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:57.780 07:07:41 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:57.780 07:07:41 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:57.780 07:07:41 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:57.780 07:07:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.780 07:07:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:57.780 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:14:57.780 07:07:41 -- nvmf/common.sh@469 -- # nvmfpid=74295 00:14:57.780 07:07:41 -- nvmf/common.sh@470 -- # waitforlisten 74295 00:14:57.780 07:07:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.780 07:07:41 -- common/autotest_common.sh@819 -- # '[' -z 74295 ']' 00:14:57.780 07:07:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.780 07:07:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.780 07:07:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
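nvmfappstart above always resolves to the same target binary; what varies per test is the core mask, and the command array is prefixed with the namespace wrapper (nvmf/common.sh@147 and @208 in the trace) so that every invocation runs inside nvmf_tgt_ns_spdk. A condensed sketch of that mechanism using the values visible in the log:

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run the target inside the namespace
    "${NVMF_APP[@]}" -m 0xF &                                # multipath target uses all four cores
    nvmfpid=$!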
00:14:57.780 07:07:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.780 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:14:58.039 [2024-07-11 07:07:41.857602] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:58.039 [2024-07-11 07:07:41.857733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.039 [2024-07-11 07:07:41.998379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.039 [2024-07-11 07:07:42.091306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.039 [2024-07-11 07:07:42.091480] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.039 [2024-07-11 07:07:42.091493] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.039 [2024-07-11 07:07:42.091502] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.039 [2024-07-11 07:07:42.091954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.039 [2024-07-11 07:07:42.092110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.039 [2024-07-11 07:07:42.092261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.039 [2024-07-11 07:07:42.092271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.975 07:07:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.975 07:07:42 -- common/autotest_common.sh@852 -- # return 0 00:14:58.975 07:07:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.975 07:07:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:58.975 07:07:42 -- common/autotest_common.sh@10 -- # set +x 00:14:58.975 07:07:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.976 07:07:42 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:58.976 [2024-07-11 07:07:43.023650] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.233 07:07:43 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:59.233 Malloc0 00:14:59.233 07:07:43 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:59.491 07:07:43 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:59.748 07:07:43 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.005 [2024-07-11 07:07:43.826369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.005 07:07:43 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:00.005 [2024-07-11 07:07:44.050655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:00.262 07:07:44 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:00.262 07:07:44 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:00.520 07:07:44 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.520 07:07:44 -- common/autotest_common.sh@1177 -- # local i=0 00:15:00.520 07:07:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.520 07:07:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:00.520 07:07:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:03.052 07:07:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:03.052 07:07:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:03.052 07:07:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.052 07:07:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:03.052 07:07:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.052 07:07:46 -- common/autotest_common.sh@1187 -- # return 0 00:15:03.052 07:07:46 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:03.052 07:07:46 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:03.052 07:07:46 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:03.052 07:07:46 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:03.052 07:07:46 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:03.052 07:07:46 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:03.052 07:07:46 -- target/multipath.sh@38 -- # return 0 00:15:03.052 07:07:46 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:03.052 07:07:46 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:03.052 07:07:46 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:03.052 07:07:46 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:03.052 07:07:46 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:03.052 07:07:46 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:03.052 07:07:46 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:03.052 07:07:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:03.052 07:07:46 -- target/multipath.sh@22 -- # local timeout=20 00:15:03.052 07:07:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:03.052 07:07:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:03.052 07:07:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:03.052 07:07:46 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:03.052 07:07:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:03.052 07:07:46 -- target/multipath.sh@22 -- # local timeout=20 00:15:03.052 07:07:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:03.052 07:07:46 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:03.052 07:07:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:03.052 07:07:46 -- target/multipath.sh@85 -- # echo numa 00:15:03.052 07:07:46 -- target/multipath.sh@88 -- # fio_pid=74427 00:15:03.052 07:07:46 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:03.052 07:07:46 -- target/multipath.sh@90 -- # sleep 1 00:15:03.052 [global] 00:15:03.052 thread=1 00:15:03.052 invalidate=1 00:15:03.052 rw=randrw 00:15:03.052 time_based=1 00:15:03.052 runtime=6 00:15:03.052 ioengine=libaio 00:15:03.052 direct=1 00:15:03.052 bs=4096 00:15:03.052 iodepth=128 00:15:03.052 norandommap=0 00:15:03.052 numjobs=1 00:15:03.052 00:15:03.052 verify_dump=1 00:15:03.052 verify_backlog=512 00:15:03.052 verify_state_save=0 00:15:03.052 do_verify=1 00:15:03.052 verify=crc32c-intel 00:15:03.052 [job0] 00:15:03.052 filename=/dev/nvme0n1 00:15:03.052 Could not set queue depth (nvme0n1) 00:15:03.052 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:03.052 fio-3.35 00:15:03.052 Starting 1 thread 00:15:03.620 07:07:47 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:03.878 07:07:47 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:04.136 07:07:47 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:04.136 07:07:47 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:04.136 07:07:47 -- target/multipath.sh@22 -- # local timeout=20 00:15:04.136 07:07:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:04.136 07:07:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:04.136 07:07:48 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:04.136 07:07:48 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:04.136 07:07:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:04.136 07:07:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:04.136 07:07:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:04.136 07:07:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:04.136 07:07:48 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:04.136 07:07:48 -- target/multipath.sh@25 -- # sleep 1s 00:15:05.072 07:07:49 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:05.072 07:07:49 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:05.072 07:07:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:05.072 07:07:49 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:05.331 07:07:49 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:05.590 07:07:49 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:05.590 07:07:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:05.590 07:07:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:05.590 07:07:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:05.590 07:07:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:05.590 07:07:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:05.590 07:07:49 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:05.590 07:07:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:05.590 07:07:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:05.590 07:07:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:05.590 07:07:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:05.590 07:07:49 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:05.590 07:07:49 -- target/multipath.sh@25 -- # sleep 1s 00:15:06.525 07:07:50 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:06.525 07:07:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:06.525 07:07:50 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:06.525 07:07:50 -- target/multipath.sh@104 -- # wait 74427 00:15:09.058 00:15:09.058 job0: (groupid=0, jobs=1): err= 0: pid=74448: Thu Jul 11 07:07:52 2024 00:15:09.058 read: IOPS=13.1k, BW=51.1MiB/s (53.5MB/s)(307MiB/6005msec) 00:15:09.058 slat (usec): min=7, max=4512, avg=43.20, stdev=187.13 00:15:09.058 clat (usec): min=753, max=15062, avg=6730.52, stdev=1063.34 00:15:09.058 lat (usec): min=808, max=15072, avg=6773.72, stdev=1069.02 00:15:09.058 clat percentiles (usec): 00:15:09.058 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 5932], 00:15:09.058 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6915], 00:15:09.058 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7898], 95.00th=[ 8455], 00:15:09.058 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[14222], 99.95th=[14484], 00:15:09.058 | 99.99th=[15008] 00:15:09.058 bw ( KiB/s): min=10032, max=34192, per=51.63%, avg=27001.91, stdev=8327.24, samples=11 00:15:09.058 iops : min= 2508, max= 8548, avg=6750.45, stdev=2081.86, samples=11 00:15:09.058 write: IOPS=7685, BW=30.0MiB/s (31.5MB/s)(155MiB/5155msec); 0 zone resets 00:15:09.058 slat (usec): min=14, max=5164, avg=55.97, stdev=134.24 00:15:09.058 clat (usec): min=827, max=17783, avg=5884.82, stdev=902.47 00:15:09.058 lat (usec): min=861, max=17807, avg=5940.79, stdev=905.78 00:15:09.058 clat percentiles (usec): 00:15:09.058 | 1.00th=[ 3392], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5342], 00:15:09.058 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6063], 00:15:09.058 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 7046], 00:15:09.058 | 99.00th=[ 8586], 99.50th=[ 9503], 99.90th=[13173], 99.95th=[13960], 00:15:09.058 | 99.99th=[14877] 00:15:09.058 bw ( KiB/s): min=10288, max=33752, per=87.79%, avg=26988.18, stdev=8242.43, samples=11 00:15:09.058 iops : min= 2572, max= 8438, avg=6747.00, stdev=2060.65, samples=11 00:15:09.058 lat (usec) : 1000=0.01% 00:15:09.058 lat (msec) : 2=0.02%, 4=1.53%, 10=97.79%, 20=0.66% 00:15:09.058 cpu : usr=6.15%, sys=25.87%, ctx=7144, majf=0, minf=133 00:15:09.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:09.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.058 issued rwts: total=78505,39617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.058 00:15:09.058 Run status group 0 (all jobs): 00:15:09.058 READ: bw=51.1MiB/s (53.5MB/s), 51.1MiB/s-51.1MiB/s (53.5MB/s-53.5MB/s), io=307MiB (322MB), run=6005-6005msec 00:15:09.058 WRITE: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=155MiB (162MB), run=5155-5155msec 00:15:09.058 00:15:09.058 Disk stats (read/write): 00:15:09.058 nvme0n1: ios=77548/38824, merge=0/0, ticks=483314/211066, in_queue=694380, util=99.32% 00:15:09.058 07:07:52 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:09.317 07:07:53 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:09.317 07:07:53 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:15:09.317 07:07:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:09.317 07:07:53 -- target/multipath.sh@22 -- # local timeout=20 00:15:09.317 07:07:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:09.317 07:07:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:09.317 07:07:53 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:09.317 07:07:53 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:09.317 07:07:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:09.317 07:07:53 -- target/multipath.sh@22 -- # local timeout=20 00:15:09.317 07:07:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:09.317 07:07:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:09.317 07:07:53 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:09.317 07:07:53 -- target/multipath.sh@25 -- # sleep 1s 00:15:10.691 07:07:54 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:10.691 07:07:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:10.691 07:07:54 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:10.691 07:07:54 -- target/multipath.sh@113 -- # echo round-robin 00:15:10.691 07:07:54 -- target/multipath.sh@116 -- # fio_pid=74579 00:15:10.691 07:07:54 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:10.691 07:07:54 -- target/multipath.sh@118 -- # sleep 1 00:15:10.691 [global] 00:15:10.691 thread=1 00:15:10.691 invalidate=1 00:15:10.691 rw=randrw 00:15:10.691 time_based=1 00:15:10.691 runtime=6 00:15:10.691 ioengine=libaio 00:15:10.691 direct=1 00:15:10.691 bs=4096 00:15:10.691 iodepth=128 00:15:10.691 norandommap=0 00:15:10.691 numjobs=1 00:15:10.691 00:15:10.691 verify_dump=1 00:15:10.691 verify_backlog=512 00:15:10.691 verify_state_save=0 00:15:10.691 do_verify=1 00:15:10.691 verify=crc32c-intel 00:15:10.691 [job0] 00:15:10.691 filename=/dev/nvme0n1 00:15:10.691 Could not set queue depth (nvme0n1) 00:15:10.691 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:10.691 fio-3.35 00:15:10.691 Starting 1 thread 00:15:11.626 07:07:55 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:11.626 07:07:55 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:11.884 07:07:55 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:11.884 07:07:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:11.884 07:07:55 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.884 07:07:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:11.884 07:07:55 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:11.884 07:07:55 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:11.884 07:07:55 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:11.884 07:07:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:11.884 07:07:55 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.884 07:07:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:11.884 07:07:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.884 07:07:55 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:11.884 07:07:55 -- target/multipath.sh@25 -- # sleep 1s 00:15:12.819 07:07:56 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:12.819 07:07:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.819 07:07:56 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.819 07:07:56 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:13.077 07:07:57 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:13.335 07:07:57 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:13.335 07:07:57 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:13.335 07:07:57 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.335 07:07:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:13.335 07:07:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:13.335 07:07:57 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:13.335 07:07:57 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:13.335 07:07:57 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:13.335 07:07:57 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.335 07:07:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:13.335 07:07:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.335 07:07:57 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:13.335 07:07:57 -- target/multipath.sh@25 -- # sleep 1s 00:15:14.271 07:07:58 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:14.271 07:07:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:14.271 07:07:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:14.271 07:07:58 -- target/multipath.sh@132 -- # wait 74579 00:15:16.799 00:15:16.799 job0: (groupid=0, jobs=1): err= 0: pid=74600: Thu Jul 11 07:08:00 2024 00:15:16.799 read: IOPS=13.4k, BW=52.5MiB/s (55.1MB/s)(315MiB/6000msec) 00:15:16.799 slat (usec): min=2, max=6366, avg=37.15, stdev=174.94 00:15:16.799 clat (usec): min=344, max=16656, avg=6564.92, stdev=1643.56 00:15:16.799 lat (usec): min=371, max=16668, avg=6602.08, stdev=1648.26 00:15:16.799 clat percentiles (usec): 00:15:16.799 | 1.00th=[ 2474], 5.00th=[ 3851], 10.00th=[ 4686], 20.00th=[ 5538], 00:15:16.799 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 6849], 00:15:16.799 | 70.00th=[ 7111], 80.00th=[ 7504], 90.00th=[ 8291], 95.00th=[ 9372], 00:15:16.799 | 99.00th=[11863], 99.50th=[12518], 99.90th=[14353], 99.95th=[15008], 00:15:16.799 | 99.99th=[15926] 00:15:16.799 bw ( KiB/s): min=11832, max=33373, per=50.56%, avg=27197.73, stdev=7019.56, samples=11 00:15:16.799 iops : min= 2958, max= 8343, avg=6799.36, stdev=1754.84, samples=11 00:15:16.799 write: IOPS=7798, BW=30.5MiB/s (31.9MB/s)(161MiB/5299msec); 0 zone resets 00:15:16.799 slat (usec): min=3, max=6035, avg=49.19, stdev=125.52 00:15:16.799 clat (usec): min=304, max=13127, avg=5597.23, stdev=1455.83 00:15:16.799 lat (usec): min=389, max=13151, avg=5646.42, stdev=1460.71 00:15:16.799 clat percentiles (usec): 00:15:16.799 | 1.00th=[ 2114], 5.00th=[ 2900], 10.00th=[ 3425], 20.00th=[ 4555], 00:15:16.799 | 30.00th=[ 5276], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 5997], 00:15:16.799 | 70.00th=[ 6194], 80.00th=[ 6456], 90.00th=[ 6915], 95.00th=[ 7832], 00:15:16.799 | 99.00th=[ 9634], 99.50th=[10814], 99.90th=[12125], 99.95th=[12387], 00:15:16.799 | 99.99th=[12911] 00:15:16.799 bw ( KiB/s): min=12240, max=32806, per=87.05%, avg=27155.45, stdev=6738.89, samples=11 00:15:16.799 iops : min= 3060, max= 8201, avg=6788.82, stdev=1684.68, samples=11 00:15:16.799 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.05% 00:15:16.799 lat (msec) : 2=0.50%, 4=8.45%, 10=88.60%, 20=2.35% 00:15:16.799 cpu : usr=6.60%, sys=25.70%, ctx=8051, majf=0, minf=96 00:15:16.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:16.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.799 issued rwts: total=80681,41324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.799 00:15:16.799 Run status group 0 (all jobs): 00:15:16.799 READ: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=315MiB (330MB), run=6000-6000msec 00:15:16.799 WRITE: bw=30.5MiB/s (31.9MB/s), 30.5MiB/s-30.5MiB/s (31.9MB/s-31.9MB/s), io=161MiB (169MB), run=5299-5299msec 00:15:16.799 00:15:16.799 Disk stats (read/write): 00:15:16.799 nvme0n1: ios=79674/40514, merge=0/0, ticks=483609/208008, in_queue=691617, util=98.55% 00:15:16.799 07:08:00 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:16.799 07:08:00 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.799 07:08:00 -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.799 07:08:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:16.799 07:08:00 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.799 07:08:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:16.799 07:08:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.799 07:08:00 -- common/autotest_common.sh@1210 -- # return 0 00:15:16.799 07:08:00 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.057 07:08:00 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:17.057 07:08:00 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:17.057 07:08:00 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:17.057 07:08:00 -- target/multipath.sh@144 -- # nvmftestfini 00:15:17.057 07:08:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:17.057 07:08:00 -- nvmf/common.sh@116 -- # sync 00:15:17.057 07:08:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:17.057 07:08:01 -- nvmf/common.sh@119 -- # set +e 00:15:17.057 07:08:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:17.057 07:08:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:17.057 rmmod nvme_tcp 00:15:17.057 rmmod nvme_fabrics 00:15:17.057 rmmod nvme_keyring 00:15:17.057 07:08:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:17.057 07:08:01 -- nvmf/common.sh@123 -- # set -e 00:15:17.057 07:08:01 -- nvmf/common.sh@124 -- # return 0 00:15:17.057 07:08:01 -- nvmf/common.sh@477 -- # '[' -n 74295 ']' 00:15:17.057 07:08:01 -- nvmf/common.sh@478 -- # killprocess 74295 00:15:17.057 07:08:01 -- common/autotest_common.sh@926 -- # '[' -z 74295 ']' 00:15:17.057 07:08:01 -- common/autotest_common.sh@930 -- # kill -0 74295 00:15:17.057 07:08:01 -- common/autotest_common.sh@931 -- # uname 00:15:17.057 07:08:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:17.057 07:08:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74295 00:15:17.057 07:08:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:17.057 07:08:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:17.057 killing process with pid 74295 00:15:17.057 07:08:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74295' 00:15:17.057 07:08:01 -- common/autotest_common.sh@945 -- # kill 74295 00:15:17.057 07:08:01 -- common/autotest_common.sh@950 -- # wait 74295 00:15:17.315 07:08:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:17.315 07:08:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:17.315 07:08:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:17.315 07:08:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.315 07:08:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:17.315 07:08:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.315 07:08:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.315 07:08:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.576 07:08:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:17.576 00:15:17.576 real 0m20.086s 00:15:17.576 user 1m18.439s 00:15:17.576 sys 0m6.411s 00:15:17.576 07:08:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.576 ************************************ 00:15:17.576 END TEST nvmf_multipath 00:15:17.576 ************************************ 00:15:17.576 07:08:01 -- common/autotest_common.sh@10 -- # set +x 00:15:17.576 07:08:01 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:17.576 07:08:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:17.576 07:08:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.576 07:08:01 -- common/autotest_common.sh@10 -- # set +x 00:15:17.576 ************************************ 00:15:17.576 START TEST nvmf_zcopy 00:15:17.576 ************************************ 00:15:17.576 07:08:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:17.576 * Looking for test storage... 00:15:17.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:17.576 07:08:01 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.576 07:08:01 -- nvmf/common.sh@7 -- # uname -s 00:15:17.576 07:08:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.576 07:08:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.576 07:08:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.576 07:08:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.576 07:08:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.576 07:08:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.576 07:08:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.576 07:08:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.576 07:08:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.576 07:08:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.576 07:08:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:15:17.576 07:08:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:15:17.576 07:08:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.576 07:08:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.576 07:08:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.576 07:08:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.576 07:08:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.576 07:08:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.576 07:08:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.576 07:08:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.576 07:08:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.576 
07:08:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.576 07:08:01 -- paths/export.sh@5 -- # export PATH 00:15:17.576 07:08:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.576 07:08:01 -- nvmf/common.sh@46 -- # : 0 00:15:17.576 07:08:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:17.576 07:08:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:17.576 07:08:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:17.576 07:08:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.576 07:08:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.576 07:08:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:17.576 07:08:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:17.576 07:08:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:17.576 07:08:01 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:17.576 07:08:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:17.576 07:08:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.576 07:08:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:17.576 07:08:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:17.576 07:08:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:17.576 07:08:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.576 07:08:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.576 07:08:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.576 07:08:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:17.576 07:08:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:17.576 07:08:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:17.576 07:08:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:17.576 07:08:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:17.576 07:08:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:17.576 07:08:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.576 07:08:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.576 07:08:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.576 07:08:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:17.576 07:08:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.576 07:08:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.576 07:08:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.576 07:08:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.576 07:08:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.576 07:08:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.576 07:08:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.576 07:08:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.576 07:08:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:17.576 07:08:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:17.576 Cannot find device "nvmf_tgt_br" 00:15:17.576 07:08:01 -- nvmf/common.sh@154 -- # true 00:15:17.576 07:08:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.576 Cannot find device "nvmf_tgt_br2" 00:15:17.576 07:08:01 -- nvmf/common.sh@155 -- # true 00:15:17.576 07:08:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:17.576 07:08:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:17.576 Cannot find device "nvmf_tgt_br" 00:15:17.576 07:08:01 -- nvmf/common.sh@157 -- # true 00:15:17.576 07:08:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:17.576 Cannot find device "nvmf_tgt_br2" 00:15:17.576 07:08:01 -- nvmf/common.sh@158 -- # true 00:15:17.576 07:08:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:17.877 07:08:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:17.877 07:08:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.877 07:08:01 -- nvmf/common.sh@161 -- # true 00:15:17.877 07:08:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.877 07:08:01 -- nvmf/common.sh@162 -- # true 00:15:17.877 07:08:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.877 07:08:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.877 07:08:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.877 07:08:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.877 07:08:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.877 07:08:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.877 07:08:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.877 07:08:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.877 07:08:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.877 07:08:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:17.877 07:08:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:17.877 07:08:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:17.877 07:08:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:17.877 07:08:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.877 07:08:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.877 07:08:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.877 07:08:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:17.877 
07:08:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:17.877 07:08:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.877 07:08:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.877 07:08:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.877 07:08:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.877 07:08:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.877 07:08:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:17.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:15:17.877 00:15:17.877 --- 10.0.0.2 ping statistics --- 00:15:17.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.877 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:17.877 07:08:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:17.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:17.877 00:15:17.877 --- 10.0.0.3 ping statistics --- 00:15:17.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.877 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:17.877 07:08:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:17.877 00:15:17.877 --- 10.0.0.1 ping statistics --- 00:15:17.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.877 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:17.877 07:08:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.877 07:08:01 -- nvmf/common.sh@421 -- # return 0 00:15:17.877 07:08:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.877 07:08:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.877 07:08:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:17.877 07:08:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:17.877 07:08:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.877 07:08:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:17.877 07:08:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:17.877 07:08:01 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:17.877 07:08:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.877 07:08:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:17.877 07:08:01 -- common/autotest_common.sh@10 -- # set +x 00:15:17.877 07:08:01 -- nvmf/common.sh@469 -- # nvmfpid=74872 00:15:17.877 07:08:01 -- nvmf/common.sh@470 -- # waitforlisten 74872 00:15:17.877 07:08:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.877 07:08:01 -- common/autotest_common.sh@819 -- # '[' -z 74872 ']' 00:15:17.877 07:08:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.877 07:08:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.877 07:08:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
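As in the multipath run above, the target binary itself is unchanged; nvmf/common.sh@208 prefixes NVMF_APP with the namespace command, so the nvmf_tgt process and its port-4420 listener live inside nvmf_tgt_ns_spdk while rpc.py and the initiator stay in the root namespace. A minimal sketch of the start-and-wait step, with paths and arguments taken from the trace (the polling loop is only an illustration; the real waitforlisten helper in autotest_common.sh may wait differently):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # illustration: poll the RPC socket until the application answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The zcopy test only needs a single reactor, hence the -m 0x2 core mask here versus -m 0xF in the multipath run.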
00:15:17.877 07:08:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.877 07:08:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.144 [2024-07-11 07:08:01.945960] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:18.144 [2024-07-11 07:08:01.946042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.145 [2024-07-11 07:08:02.081082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.145 [2024-07-11 07:08:02.168190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.145 [2024-07-11 07:08:02.168338] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.145 [2024-07-11 07:08:02.168351] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.145 [2024-07-11 07:08:02.168359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.145 [2024-07-11 07:08:02.168385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.080 07:08:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:19.080 07:08:02 -- common/autotest_common.sh@852 -- # return 0 00:15:19.080 07:08:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:19.080 07:08:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:19.080 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.080 07:08:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.080 07:08:02 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:19.080 07:08:02 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:19.080 07:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.080 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.080 [2024-07-11 07:08:02.901079] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.080 07:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.080 07:08:02 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:19.080 07:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.080 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.080 07:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.080 07:08:02 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.080 07:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.080 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.080 [2024-07-11 07:08:02.917238] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.080 07:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.080 07:08:02 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:19.080 07:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.080 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.080 07:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.080 07:08:02 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
00:15:19.080 07:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.080 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.080 malloc0 00:15:19.080 07:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.080 07:08:02 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:19.080 07:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.080 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.080 07:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.080 07:08:02 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:19.080 07:08:02 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:19.080 07:08:02 -- nvmf/common.sh@520 -- # config=() 00:15:19.080 07:08:02 -- nvmf/common.sh@520 -- # local subsystem config 00:15:19.080 07:08:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:19.080 07:08:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:19.080 { 00:15:19.080 "params": { 00:15:19.080 "name": "Nvme$subsystem", 00:15:19.080 "trtype": "$TEST_TRANSPORT", 00:15:19.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:19.080 "adrfam": "ipv4", 00:15:19.080 "trsvcid": "$NVMF_PORT", 00:15:19.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:19.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:19.080 "hdgst": ${hdgst:-false}, 00:15:19.080 "ddgst": ${ddgst:-false} 00:15:19.080 }, 00:15:19.080 "method": "bdev_nvme_attach_controller" 00:15:19.080 } 00:15:19.080 EOF 00:15:19.080 )") 00:15:19.080 07:08:02 -- nvmf/common.sh@542 -- # cat 00:15:19.080 07:08:02 -- nvmf/common.sh@544 -- # jq . 00:15:19.080 07:08:02 -- nvmf/common.sh@545 -- # IFS=, 00:15:19.080 07:08:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:19.080 "params": { 00:15:19.080 "name": "Nvme1", 00:15:19.080 "trtype": "tcp", 00:15:19.080 "traddr": "10.0.0.2", 00:15:19.080 "adrfam": "ipv4", 00:15:19.080 "trsvcid": "4420", 00:15:19.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.080 "hdgst": false, 00:15:19.080 "ddgst": false 00:15:19.080 }, 00:15:19.080 "method": "bdev_nvme_attach_controller" 00:15:19.080 }' 00:15:19.080 [2024-07-11 07:08:03.009770] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:19.080 [2024-07-11 07:08:03.009850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74923 ] 00:15:19.080 [2024-07-11 07:08:03.138997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.339 [2024-07-11 07:08:03.224172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.597 Running I/O for 10 seconds... 
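Condensed, the zcopy target setup and the first bdevperf run above come down to this: the TCP transport is created with zero-copy enabled, one subsystem backed by a malloc namespace is exposed at 10.0.0.2:4420 inside the namespace, and bdevperf attaches to it as an NVMe-oF initiator using a JSON config handed over on an anonymous file descriptor. A sketch using the values from the trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py; gen_nvmf_target_json wraps the bdev_nvme_attach_controller parameters shown above into a bdev-subsystem config that bdevperf can load):

  # target side, over RPC
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # initiator side: 10 s verify workload, queue depth 128, 8 KiB I/O,
  # bdev config delivered on a file descriptor (/dev/fd/62 in the trace)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

This first run checks end-to-end data integrity; the second bdevperf invocation further down switches to a mixed workload (-t 5 -q 128 -w randrw -M 50 -o 8192) while the test keeps issuing RPCs against the running target.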
00:15:29.568 00:15:29.568 Latency(us) 00:15:29.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.568 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:29.568 Verification LBA range: start 0x0 length 0x1000 00:15:29.568 Nvme1n1 : 10.01 10981.40 85.79 0.00 0.00 11628.04 1020.28 19660.80 00:15:29.568 =================================================================================================================== 00:15:29.568 Total : 10981.40 85.79 0.00 0.00 11628.04 1020.28 19660.80 00:15:29.828 07:08:13 -- target/zcopy.sh@39 -- # perfpid=75046 00:15:29.828 07:08:13 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:29.828 07:08:13 -- common/autotest_common.sh@10 -- # set +x 00:15:29.828 07:08:13 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:29.828 07:08:13 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:29.828 07:08:13 -- nvmf/common.sh@520 -- # config=() 00:15:29.828 07:08:13 -- nvmf/common.sh@520 -- # local subsystem config 00:15:29.828 07:08:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:29.828 07:08:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:29.828 { 00:15:29.828 "params": { 00:15:29.828 "name": "Nvme$subsystem", 00:15:29.828 "trtype": "$TEST_TRANSPORT", 00:15:29.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:29.828 "adrfam": "ipv4", 00:15:29.828 "trsvcid": "$NVMF_PORT", 00:15:29.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:29.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:29.828 "hdgst": ${hdgst:-false}, 00:15:29.828 "ddgst": ${ddgst:-false} 00:15:29.828 }, 00:15:29.828 "method": "bdev_nvme_attach_controller" 00:15:29.828 } 00:15:29.828 EOF 00:15:29.828 )") 00:15:29.828 07:08:13 -- nvmf/common.sh@542 -- # cat 00:15:29.828 [2024-07-11 07:08:13.662556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.662634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 07:08:13 -- nvmf/common.sh@544 -- # jq . 
00:15:29.828 07:08:13 -- nvmf/common.sh@545 -- # IFS=, 00:15:29.828 07:08:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:29.828 "params": { 00:15:29.828 "name": "Nvme1", 00:15:29.828 "trtype": "tcp", 00:15:29.828 "traddr": "10.0.0.2", 00:15:29.828 "adrfam": "ipv4", 00:15:29.828 "trsvcid": "4420", 00:15:29.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:29.828 "hdgst": false, 00:15:29.828 "ddgst": false 00:15:29.828 }, 00:15:29.828 "method": "bdev_nvme_attach_controller" 00:15:29.828 }' 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.674484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.674511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.686474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.686499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.694472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.694495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.706479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.706503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.715585] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:29.828 [2024-07-11 07:08:13.715670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75046 ] 00:15:29.828 [2024-07-11 07:08:13.718481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.718505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.730480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.730504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.742486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.742510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.754483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.754513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.766495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.766520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.778485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.778508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.790486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.790509] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.802492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.802517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.814492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.814515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.826495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.826518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.838496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.838519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.850496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.850519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.854772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.828 [2024-07-11 07:08:13.862498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.862521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.828 [2024-07-11 07:08:13.874503] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.828 [2024-07-11 07:08:13.874526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.828 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.886504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.886537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.898512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.898547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.910509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.910532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.922512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.922535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.934257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.094 [2024-07-11 07:08:13.934516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.934530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.946516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.946539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.958523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.958558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.970523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.970546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.982526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.982549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:13.994530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:13.994553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.006532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.006555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.018536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.018559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.030540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.030562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.042566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.042594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.054560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.054586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.066567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.066595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.078582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.078610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.090573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.090599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.102585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.102614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 Running I/O for 5 seconds... 
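Interleaved with the bdevperf run that has just started, the target keeps rejecting nvmf_subsystem_add_ns for NSID 1 with JSON-RPC error -32602 (Invalid parameters), because that namespace is already attached to nqn.2016-06.io.spdk:cnode1; the log shows the call being reissued every few milliseconds. Below is a rough, self-contained Python sketch of one such call. The method name and params are taken verbatim from the log; the standalone script and the default RPC socket path /var/tmp/spdk.sock are assumptions about this environment.

#!/usr/bin/env python3
"""Sketch: issue the nvmf_subsystem_add_ns call that is failing in the log.
Socket path and script are assumptions; request body mirrors the log."""
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumed default; the test may use a custom -r path

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(RPC_SOCK)
    sock.sendall(json.dumps(request).encode())
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("RPC socket closed before a full reply arrived")
        buf += chunk
        try:
            reply = json.loads(buf)  # keep reading until the reply parses as complete JSON
            break
        except json.JSONDecodeError:
            continue

if "error" in reply:
    # Matches the log: code -32602 while NSID 1 is still attached to the subsystem.
    print(f"add_ns failed: code={reply['error']['code']} msg={reply['error']['message']}")
else:
    print(f"add_ns succeeded: result={reply['result']}")

While NSID 1 remains in use, the -32602 reply is the expected outcome of this call; a success reply would carry a result object instead of an error, as presumably happens once the earlier namespace is removed.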
00:15:30.094 [2024-07-11 07:08:14.114577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.114601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.131049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.131079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.094 [2024-07-11 07:08:14.141337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.094 [2024-07-11 07:08:14.141366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.094 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.353 [2024-07-11 07:08:14.157601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.353 [2024-07-11 07:08:14.157630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.353 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.353 [2024-07-11 07:08:14.174210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.353 [2024-07-11 07:08:14.174240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.353 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.353 [2024-07-11 07:08:14.190527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.353 [2024-07-11 07:08:14.190557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.353 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.353 [2024-07-11 07:08:14.207388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.353 [2024-07-11 07:08:14.207418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.223722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.223764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.240284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.240315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.257363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.257405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.273664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.273695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.290110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.290141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.306291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.306330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.322543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.322573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.338762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.338792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.355479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.355508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.371795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.371825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.388045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.388075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.354 [2024-07-11 07:08:14.404313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.354 [2024-07-11 07:08:14.404343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.354 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.421479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.421518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.437985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.438027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.454237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.454268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.470382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.470412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.487033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.487063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.503273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.503304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.519689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.519719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.536735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.536777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.552255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.552296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.569304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.569334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.586231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.586273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.602277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.602335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.618737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.618768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.635115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.635146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.652190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.652233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.613 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.613 [2024-07-11 07:08:14.668122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.613 [2024-07-11 07:08:14.668154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.685161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.685191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.701672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.701711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.718527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.718557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.734717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.734747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.750642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.750673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.767290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.767320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.783860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.783891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.800252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.800282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.816755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.816798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.833429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.833468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.850114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.850143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.866612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.866641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.882520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.882550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.898527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.898556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.913270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.913301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.873 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.873 [2024-07-11 07:08:14.930338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.873 [2024-07-11 07:08:14.930368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.132 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.132 [2024-07-11 07:08:14.946324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.132 [2024-07-11 07:08:14.946365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.132 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.132 [2024-07-11 07:08:14.962823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.132 [2024-07-11 07:08:14.962854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.132 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.132 [2024-07-11 07:08:14.979403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.132 [2024-07-11 07:08:14.979434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.132 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.132 [2024-07-11 07:08:14.996285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.132 [2024-07-11 07:08:14.996315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.132 2024/07/11 07:08:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.011631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.011674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.028281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.028311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.043827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.043857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.058322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.058351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.075370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.075401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.091285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.091315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.108009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.108038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.124440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.124480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 
07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.141039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.141069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.156437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.156475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.168058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.168088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.133 [2024-07-11 07:08:15.184395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.133 [2024-07-11 07:08:15.184425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.133 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.200612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.200643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.217621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.217650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.233707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.233737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.250639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.250676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.267420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.267460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.283954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.283985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.300351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.300380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.317116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.317145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.391 [2024-07-11 07:08:15.333813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.391 [2024-07-11 07:08:15.333843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.391 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.392 [2024-07-11 07:08:15.350492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.392 [2024-07-11 07:08:15.350521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:31.392 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.392 [2024-07-11 07:08:15.367332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.392 [2024-07-11 07:08:15.367362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.392 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.392 [2024-07-11 07:08:15.383702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.392 [2024-07-11 07:08:15.383732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.392 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.392 [2024-07-11 07:08:15.399881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.392 [2024-07-11 07:08:15.399912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.392 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.392 [2024-07-11 07:08:15.416852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.392 [2024-07-11 07:08:15.416881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.392 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.392 [2024-07-11 07:08:15.433459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.392 [2024-07-11 07:08:15.433488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.392 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.392 [2024-07-11 07:08:15.449954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.392 [2024-07-11 07:08:15.449984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.465992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.466023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.482134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.482163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.498179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.498210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.514504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.514533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.525933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.525963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.541380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.541411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.558076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.558106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.574520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.574550] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.590629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.590659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.606830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.606860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.650 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.650 [2024-07-11 07:08:15.618626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.650 [2024-07-11 07:08:15.618655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.651 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.651 [2024-07-11 07:08:15.635095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.651 [2024-07-11 07:08:15.635125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.651 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.651 [2024-07-11 07:08:15.651280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.651 [2024-07-11 07:08:15.651311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.651 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.651 [2024-07-11 07:08:15.668141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.651 [2024-07-11 07:08:15.668171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.651 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.651 [2024-07-11 07:08:15.684536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.651 [2024-07-11 
07:08:15.684565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.651 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.651 [2024-07-11 07:08:15.700638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.651 [2024-07-11 07:08:15.700668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.651 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.717659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.717690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.728436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.728497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.744840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.744870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.755188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.755231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.770877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.770908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.786985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
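(Editor's note, not part of the captured log: the loop above keeps issuing nvmf_subsystem_add_ns for nqn.2016-06.io.spdk:cnode1 with bdev_name:malloc0 and nsid:1 while that NSID is already attached, so every call comes back with Code=-32602 Msg=Invalid parameters. A minimal sketch of how the same response can be reproduced against a running nvmf target with SPDK's stock scripts/rpc.py helper follows; the bdev size, block size and serial number are illustrative assumptions, not values taken from this run, and this is not the test script itself.)

  # Sketch only: assumes a running nvmf_tgt and scripts/rpc.py from the SPDK repo.
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat fails: "Requested NSID 1 already in use",
                                                                                 # JSON-RPC error Code=-32602 Msg=Invalid parameters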
00:15:31.910 [2024-07-11 07:08:15.787023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.803961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.803993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.819588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.819620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.830402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.830434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.846720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.846764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.862822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.862854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.879273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.879305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.889787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:31.910 [2024-07-11 07:08:15.889820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.905501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.905532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.922074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.922107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.938559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.938590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.910 [2024-07-11 07:08:15.954923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.910 [2024-07-11 07:08:15.954956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.910 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:15.971199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:15.971230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:15.987697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:15.987729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:15.998341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:15.998373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.013565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.013597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.030483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.030513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.047159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.047191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.058576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.058606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.074687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.074718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.090892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.090924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.107540] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.107571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.124247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.124279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.140626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.140657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.156511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.156542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.173561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.173592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.169 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.169 [2024-07-11 07:08:16.189206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.169 [2024-07-11 07:08:16.189238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.170 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.170 [2024-07-11 07:08:16.200093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.170 [2024-07-11 07:08:16.200124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.170 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.170 [2024-07-11 
07:08:16.215495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.170 [2024-07-11 07:08:16.215525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.170 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.428 [2024-07-11 07:08:16.232058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.232089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.248699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.248731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.259315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.259347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.275212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.275244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.291787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.291825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.308219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.308251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
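(Editor's note: when the same "NSID already in use" rejection shows up outside a negative-path test, the quickest check is to dump the subsystem and see which namespace currently holds the NSID. A brief sketch, assuming the default RPC socket at /var/tmp/spdk.sock; the jq filter and the expected output shape are illustrative.)

  # List the subsystem's namespaces to see what already occupies NSID 1.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems \
    | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'
  # Illustrative output: [ { "nsid": 1, "bdev_name": "malloc0", ... } ]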
00:15:32.429 [2024-07-11 07:08:16.318865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.318896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.334660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.334691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.350757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.350788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.367439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.367482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.377984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.378016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.393999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.394031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.404294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.404325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.419960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.419992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.436543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.436573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.452217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.452250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.469528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.469558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.429 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.429 [2024-07-11 07:08:16.486366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.429 [2024-07-11 07:08:16.486398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.503013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.503046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.519416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.519458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.529686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.529719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.545877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.545909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.556068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.556099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.572189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.572221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.589220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.589252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.605705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.605736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.622745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.622777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.638652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.638684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.649039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.649069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.664598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.664629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.681489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.681520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.697565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.697595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.714140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.714171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.731135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.731167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.688 [2024-07-11 07:08:16.741464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.688 [2024-07-11 07:08:16.741495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.688 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.757639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.757671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.774114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.774146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.790640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.790671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.801100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.801131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.817789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.817820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.832322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.832353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.847506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.847537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.864352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.864384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.881051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.881083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.897792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.897825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.914141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.914173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.930969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.931001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.947827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.947860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.958416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.958461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.973962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.973993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.947 [2024-07-11 07:08:16.990190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.947 [2024-07-11 07:08:16.990223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.947 2024/07/11 07:08:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.007010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.007043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.018055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.018087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.034726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.034758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.045345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.045377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.061220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.061252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.077518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.077548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.094787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.094820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.111058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.111089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.127563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.127594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.206 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.206 [2024-07-11 07:08:17.144082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.206 [2024-07-11 07:08:17.144114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.207 [2024-07-11 07:08:17.160909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.207 [2024-07-11 07:08:17.160940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.207 [2024-07-11 07:08:17.177039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.207 [2024-07-11 07:08:17.177071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.207 [2024-07-11 07:08:17.193570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.207 [2024-07-11 07:08:17.193600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.207 [2024-07-11 07:08:17.209467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.207 [2024-07-11 07:08:17.209498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.207 [2024-07-11 07:08:17.226266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.207 [2024-07-11 07:08:17.226308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.207 [2024-07-11 07:08:17.243265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.207 [2024-07-11 07:08:17.243297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.207 [2024-07-11 07:08:17.259334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.207 [2024-07-11 07:08:17.259365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.207 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.465 [2024-07-11 07:08:17.276435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.465 [2024-07-11 07:08:17.276476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.465 2024/07/11 
07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.465 [2024-07-11 07:08:17.287289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.465 [2024-07-11 07:08:17.287321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.303355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.303399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.319385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.319416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.335854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.335886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.352378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.352409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.369163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.369195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.385971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.386003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
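(Editor's note: in this run the repeated rejection is the expected negative-path behavior being exercised; if the conflict were unintended, the two usual ways out are to remove the namespace that holds the NSID before re-adding, or to omit the NSID so the target assigns the next free one. A hedged sketch with the stock rpc.py helper; malloc1 is a hypothetical second bdev, and the flags are worth confirming against the local scripts/rpc.py --help.)

  # Option 1: remove the namespace currently holding NSID 1, then re-add it.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Option 2: omit the NSID and let the target pick the lowest free one.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1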
00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.397194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.397226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.407847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.407880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.415522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.415552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.431264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.431296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.441781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.441811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.457750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.457780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.474057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.474088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.489719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.489749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.506713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.506745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.466 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.466 [2024-07-11 07:08:17.523236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.466 [2024-07-11 07:08:17.523268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.533680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.533712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.550155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.550187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.565706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.565738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.582009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.582042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.594029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.594061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.610156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.610187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.619471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.619502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.632957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.632988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.640501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.640531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.656466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.656496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.668149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.668180] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.684918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.684949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.700995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.701025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.711604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.711633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.725 [2024-07-11 07:08:17.727297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.725 [2024-07-11 07:08:17.727329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.725 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.726 [2024-07-11 07:08:17.743742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.726 [2024-07-11 07:08:17.743774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.726 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.726 [2024-07-11 07:08:17.759742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.726 [2024-07-11 07:08:17.759772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.726 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.726 [2024-07-11 07:08:17.776372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.726 [2024-07-11 
07:08:17.776403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.726 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.792628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.792660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.809775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.809807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.825857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.825889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.842158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.842189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.858986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.859018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.874845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.874876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.891520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:33.984 [2024-07-11 07:08:17.891550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.902097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.902128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.918631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.918662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.934992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.935023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.984 [2024-07-11 07:08:17.951141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.984 [2024-07-11 07:08:17.951172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.984 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.985 [2024-07-11 07:08:17.967325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.985 [2024-07-11 07:08:17.967356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.985 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.985 [2024-07-11 07:08:17.979214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.985 [2024-07-11 07:08:17.979246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.985 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.985 [2024-07-11 07:08:17.990694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:33.985 [2024-07-11 07:08:17.990726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.985 2024/07/11 07:08:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.985 [2024-07-11 07:08:18.007030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.985 [2024-07-11 07:08:18.007063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.985 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.985 [2024-07-11 07:08:18.023578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.985 [2024-07-11 07:08:18.023609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.985 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.985 [2024-07-11 07:08:18.040468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.985 [2024-07-11 07:08:18.040505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.243 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.243 [2024-07-11 07:08:18.056756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.243 [2024-07-11 07:08:18.056787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.243 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.243 [2024-07-11 07:08:18.073141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.073184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.089708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.089740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.106592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.106623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.122372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.122405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.139280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.139312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.149312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.149344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.165950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.165981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.176556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.176587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.192279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.192312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.208523] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.208553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.225253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.225284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.241630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.241662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.258275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.258316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.275002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.275033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.244 [2024-07-11 07:08:18.292235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.244 [2024-07-11 07:08:18.292267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.244 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.303209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.303241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 
07:08:18.319609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.319641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.336139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.336170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.352519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.352550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.369384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.369416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.385972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.386004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.402309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.402343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.412649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.412682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
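For context, every repetition above is the same JSON-RPC exchange: the test keeps asking the target to attach bdev malloc0 as NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already in use, so each attempt comes back with Code=-32602 (Invalid parameters). A minimal sketch of one such exchange in Python, with the method, params and error taken from the log entries themselves and the socket path assumed to be SPDK's usual default (the actual path is not shown in this log):

import json, socket

# Sketch only: method, params and the expected error are copied from the log above;
# the socket path is an assumption (SPDK's default) and may differ in the test VM.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk.sock")          # assumed default RPC socket path
sock.sendall(json.dumps(request).encode())
response = json.loads(sock.recv(65536))
# While NSID 1 is already attached, the target replies with the JSON-RPC error
# seen in the log: {"code": -32602, "message": "Invalid parameters"}.
print(response.get("error"))
sock.close()

This is only an illustration of the wire format implied by the log entries, not a reproduction of the test script itself.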
00:15:34.503 [2024-07-11 07:08:18.428650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.428681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.445123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.445154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.461837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.461869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.478070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.478102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.494600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.494634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.505344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.505376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.503 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.503 [2024-07-11 07:08:18.520763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.503 [2024-07-11 07:08:18.520794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.504 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:34.504 [2024-07-11 07:08:18.537329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.504 [2024-07-11 07:08:18.537360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.504 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.504 [2024-07-11 07:08:18.554019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.504 [2024-07-11 07:08:18.554051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.504 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.762 [2024-07-11 07:08:18.571025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.762 [2024-07-11 07:08:18.571057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.762 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.762 [2024-07-11 07:08:18.581866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.762 [2024-07-11 07:08:18.581898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.762 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.762 [2024-07-11 07:08:18.598031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.762 [2024-07-11 07:08:18.598062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.762 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.762 [2024-07-11 07:08:18.614298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.762 [2024-07-11 07:08:18.614328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.762 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.762 [2024-07-11 07:08:18.631117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.762 [2024-07-11 07:08:18.631149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.762 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:34.762 [2024-07-11 07:08:18.647284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.762 [2024-07-11 07:08:18.647316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.657990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.658022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.674839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.674870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.685466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.685495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.701532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.701562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.712076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.712107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.720407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.720438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.731285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.731316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.742800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.742828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.750407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.750432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.765837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.765864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.780944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.780971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.797988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.798015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.763 [2024-07-11 07:08:18.813659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.763 [2024-07-11 07:08:18.813686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.763 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.830485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.830511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.841462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.841488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.856881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.856909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.873235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.873262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.890496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.890523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.906925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.906952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.923829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.923856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.939210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.939237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.955988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.956015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.966735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.966762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.981843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.981870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:18.992374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:18.992400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:19.009046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:19.009072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:19.025630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:19.025657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:19.042145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:19.042172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:19.058860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:19.058886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.022 [2024-07-11 07:08:19.075369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.022 [2024-07-11 07:08:19.075397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.022 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.091721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.091749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.108392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.108419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.117585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.117611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 00:15:35.282 Latency(us) 00:15:35.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.282 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:35.282 Nvme1n1 : 5.01 14193.31 110.89 0.00 0.00 9007.61 3842.79 19065.02 00:15:35.282 
=================================================================================================================== 00:15:35.282 Total : 14193.31 110.89 0.00 0.00 9007.61 3842.79 19065.02 00:15:35.282 [2024-07-11 07:08:19.127607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.127645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.135601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.135624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.147600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.147622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.159604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.159625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.171603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.171624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.183613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.183635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.191607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.191632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.203612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.203634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.211602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.211622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.223616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.223638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.235620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.235640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.247620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.247640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.259622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.259642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.271632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.271654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.283628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.283649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.295631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.295651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.282 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.282 [2024-07-11 07:08:19.307633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.282 [2024-07-11 07:08:19.307653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.283 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.283 [2024-07-11 07:08:19.319635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.283 [2024-07-11 07:08:19.319656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.283 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.283 [2024-07-11 07:08:19.331637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.283 [2024-07-11 07:08:19.331657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.283 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.541 [2024-07-11 07:08:19.343647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.541 [2024-07-11 07:08:19.343670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.541 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.541 [2024-07-11 07:08:19.355645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.541 [2024-07-11 07:08:19.355666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.541 2024/07/11 
07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.541 [2024-07-11 07:08:19.367648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.542 [2024-07-11 07:08:19.367669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.542 2024/07/11 07:08:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.542 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75046) - No such process 00:15:35.542 07:08:19 -- target/zcopy.sh@49 -- # wait 75046 00:15:35.542 07:08:19 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.542 07:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.542 07:08:19 -- common/autotest_common.sh@10 -- # set +x 00:15:35.542 07:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.542 07:08:19 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:35.542 07:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.542 07:08:19 -- common/autotest_common.sh@10 -- # set +x 00:15:35.542 delay0 00:15:35.542 07:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.542 07:08:19 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:35.542 07:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.542 07:08:19 -- common/autotest_common.sh@10 -- # set +x 00:15:35.542 07:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.542 07:08:19 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:35.542 [2024-07-11 07:08:19.573151] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:42.100 Initializing NVMe Controllers 00:15:42.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:42.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:42.100 Initialization complete. Launching workers. 
00:15:42.100 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 57 00:15:42.100 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 344, failed to submit 33 00:15:42.100 success 152, unsuccess 192, failed 0 00:15:42.100 07:08:25 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:42.100 07:08:25 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:42.100 07:08:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:42.100 07:08:25 -- nvmf/common.sh@116 -- # sync 00:15:42.100 07:08:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:42.100 07:08:25 -- nvmf/common.sh@119 -- # set +e 00:15:42.100 07:08:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:42.100 07:08:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:42.100 rmmod nvme_tcp 00:15:42.100 rmmod nvme_fabrics 00:15:42.100 rmmod nvme_keyring 00:15:42.100 07:08:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:42.100 07:08:25 -- nvmf/common.sh@123 -- # set -e 00:15:42.100 07:08:25 -- nvmf/common.sh@124 -- # return 0 00:15:42.100 07:08:25 -- nvmf/common.sh@477 -- # '[' -n 74872 ']' 00:15:42.100 07:08:25 -- nvmf/common.sh@478 -- # killprocess 74872 00:15:42.100 07:08:25 -- common/autotest_common.sh@926 -- # '[' -z 74872 ']' 00:15:42.100 07:08:25 -- common/autotest_common.sh@930 -- # kill -0 74872 00:15:42.100 07:08:25 -- common/autotest_common.sh@931 -- # uname 00:15:42.100 07:08:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:42.100 07:08:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74872 00:15:42.100 killing process with pid 74872 00:15:42.100 07:08:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:42.100 07:08:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:42.100 07:08:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74872' 00:15:42.100 07:08:25 -- common/autotest_common.sh@945 -- # kill 74872 00:15:42.100 07:08:25 -- common/autotest_common.sh@950 -- # wait 74872 00:15:42.100 07:08:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:42.100 07:08:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:42.100 07:08:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:42.100 07:08:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.100 07:08:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:42.100 07:08:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.100 07:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.100 07:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.100 07:08:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:42.100 00:15:42.100 real 0m24.553s 00:15:42.100 user 0m38.345s 00:15:42.100 sys 0m7.429s 00:15:42.100 07:08:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.100 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:15:42.100 ************************************ 00:15:42.100 END TEST nvmf_zcopy 00:15:42.100 ************************************ 00:15:42.100 07:08:26 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:42.100 07:08:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:42.100 07:08:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:42.100 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:15:42.100 ************************************ 00:15:42.100 START TEST nvmf_nmic 
00:15:42.100 ************************************ 00:15:42.100 07:08:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:42.100 * Looking for test storage... 00:15:42.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:42.100 07:08:26 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.100 07:08:26 -- nvmf/common.sh@7 -- # uname -s 00:15:42.100 07:08:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.100 07:08:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.100 07:08:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.100 07:08:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.100 07:08:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.100 07:08:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.100 07:08:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.100 07:08:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.100 07:08:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.100 07:08:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.359 07:08:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:15:42.359 07:08:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:15:42.359 07:08:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.359 07:08:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.359 07:08:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.359 07:08:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.359 07:08:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.359 07:08:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.359 07:08:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.359 07:08:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.359 07:08:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.359 07:08:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.359 07:08:26 -- paths/export.sh@5 -- # export PATH 00:15:42.359 07:08:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.359 07:08:26 -- nvmf/common.sh@46 -- # : 0 00:15:42.359 07:08:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:42.359 07:08:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:42.359 07:08:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:42.359 07:08:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.360 07:08:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.360 07:08:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:42.360 07:08:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:42.360 07:08:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:42.360 07:08:26 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:42.360 07:08:26 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:42.360 07:08:26 -- target/nmic.sh@14 -- # nvmftestinit 00:15:42.360 07:08:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:42.360 07:08:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.360 07:08:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:42.360 07:08:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:42.360 07:08:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:42.360 07:08:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.360 07:08:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.360 07:08:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.360 07:08:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:42.360 07:08:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:42.360 07:08:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:42.360 07:08:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:42.360 07:08:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:42.360 07:08:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:42.360 07:08:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.360 07:08:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.360 07:08:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:42.360 07:08:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:42.360 07:08:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.360 07:08:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.360 07:08:26 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.360 07:08:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.360 07:08:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.360 07:08:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.360 07:08:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.360 07:08:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.360 07:08:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:42.360 07:08:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:42.360 Cannot find device "nvmf_tgt_br" 00:15:42.360 07:08:26 -- nvmf/common.sh@154 -- # true 00:15:42.360 07:08:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.360 Cannot find device "nvmf_tgt_br2" 00:15:42.360 07:08:26 -- nvmf/common.sh@155 -- # true 00:15:42.360 07:08:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:42.360 07:08:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:42.360 Cannot find device "nvmf_tgt_br" 00:15:42.360 07:08:26 -- nvmf/common.sh@157 -- # true 00:15:42.360 07:08:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:42.360 Cannot find device "nvmf_tgt_br2" 00:15:42.360 07:08:26 -- nvmf/common.sh@158 -- # true 00:15:42.360 07:08:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:42.360 07:08:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:42.360 07:08:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.360 07:08:26 -- nvmf/common.sh@161 -- # true 00:15:42.360 07:08:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.360 07:08:26 -- nvmf/common.sh@162 -- # true 00:15:42.360 07:08:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.360 07:08:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.360 07:08:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.360 07:08:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.360 07:08:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.360 07:08:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.360 07:08:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.360 07:08:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:42.360 07:08:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:42.360 07:08:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:42.360 07:08:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:42.360 07:08:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:42.360 07:08:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:42.360 07:08:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.360 07:08:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.619 07:08:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:42.619 07:08:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:42.619 07:08:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:42.619 07:08:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.619 07:08:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.619 07:08:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.619 07:08:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.619 07:08:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.619 07:08:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:42.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:42.619 00:15:42.619 --- 10.0.0.2 ping statistics --- 00:15:42.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.619 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:42.619 07:08:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:42.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:15:42.619 00:15:42.619 --- 10.0.0.3 ping statistics --- 00:15:42.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.619 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:42.619 07:08:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:42.619 00:15:42.619 --- 10.0.0.1 ping statistics --- 00:15:42.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.619 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:42.619 07:08:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.619 07:08:26 -- nvmf/common.sh@421 -- # return 0 00:15:42.619 07:08:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:42.619 07:08:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.619 07:08:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:42.619 07:08:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:42.619 07:08:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.619 07:08:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:42.619 07:08:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:42.619 07:08:26 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:42.619 07:08:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.619 07:08:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:42.619 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:15:42.619 07:08:26 -- nvmf/common.sh@469 -- # nvmfpid=75366 00:15:42.619 07:08:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.619 07:08:26 -- nvmf/common.sh@470 -- # waitforlisten 75366 00:15:42.619 07:08:26 -- common/autotest_common.sh@819 -- # '[' -z 75366 ']' 00:15:42.619 07:08:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.619 07:08:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:42.619 07:08:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.619 07:08:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.619 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:15:42.619 [2024-07-11 07:08:26.579684] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:42.619 [2024-07-11 07:08:26.579782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.878 [2024-07-11 07:08:26.715985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.878 [2024-07-11 07:08:26.790545] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.878 [2024-07-11 07:08:26.790705] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.878 [2024-07-11 07:08:26.790716] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.878 [2024-07-11 07:08:26.790724] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.878 [2024-07-11 07:08:26.790882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.878 [2024-07-11 07:08:26.791303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.878 [2024-07-11 07:08:26.791404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.878 [2024-07-11 07:08:26.791405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.445 07:08:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.445 07:08:27 -- common/autotest_common.sh@852 -- # return 0 00:15:43.445 07:08:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.445 07:08:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:43.445 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.445 07:08:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.445 07:08:27 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.445 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.445 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.704 [2024-07-11 07:08:27.506534] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.704 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.704 07:08:27 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.704 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.704 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.704 Malloc0 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.705 07:08:27 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:43.705 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.705 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.705 07:08:27 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.705 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.705 07:08:27 
-- common/autotest_common.sh@10 -- # set +x 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.705 07:08:27 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.705 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.705 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.705 [2024-07-11 07:08:27.567898] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.705 test case1: single bdev can't be used in multiple subsystems 00:15:43.705 07:08:27 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:43.705 07:08:27 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:43.705 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.705 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.705 07:08:27 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:43.705 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.705 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.705 07:08:27 -- target/nmic.sh@28 -- # nmic_status=0 00:15:43.705 07:08:27 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:43.705 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.705 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.705 [2024-07-11 07:08:27.591501] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:43.705 [2024-07-11 07:08:27.591541] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:43.705 [2024-07-11 07:08:27.591556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.705 2024/07/11 07:08:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.705 request: 00:15:43.705 { 00:15:43.705 "method": "nvmf_subsystem_add_ns", 00:15:43.705 "params": { 00:15:43.705 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:43.705 "namespace": { 00:15:43.705 "bdev_name": "Malloc0" 00:15:43.705 } 00:15:43.705 } 00:15:43.705 } 00:15:43.705 Got JSON-RPC error response 00:15:43.705 GoRPCClient: error on JSON-RPC call 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:43.705 07:08:27 -- target/nmic.sh@29 -- # nmic_status=1 00:15:43.705 07:08:27 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:43.705 Adding namespace failed - expected result. 00:15:43.705 07:08:27 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:15:43.705 test case2: host connect to nvmf target in multiple paths 00:15:43.705 07:08:27 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:43.705 07:08:27 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:43.705 07:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.705 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:43.705 [2024-07-11 07:08:27.603630] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:43.705 07:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.705 07:08:27 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.963 07:08:27 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:43.963 07:08:27 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.963 07:08:27 -- common/autotest_common.sh@1177 -- # local i=0 00:15:43.963 07:08:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.963 07:08:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:43.963 07:08:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:46.494 07:08:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:46.494 07:08:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:46.494 07:08:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.494 07:08:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:46.494 07:08:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.494 07:08:29 -- common/autotest_common.sh@1187 -- # return 0 00:15:46.494 07:08:29 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:46.494 [global] 00:15:46.494 thread=1 00:15:46.494 invalidate=1 00:15:46.494 rw=write 00:15:46.494 time_based=1 00:15:46.494 runtime=1 00:15:46.494 ioengine=libaio 00:15:46.494 direct=1 00:15:46.494 bs=4096 00:15:46.494 iodepth=1 00:15:46.494 norandommap=0 00:15:46.494 numjobs=1 00:15:46.494 00:15:46.494 verify_dump=1 00:15:46.494 verify_backlog=512 00:15:46.494 verify_state_save=0 00:15:46.494 do_verify=1 00:15:46.494 verify=crc32c-intel 00:15:46.494 [job0] 00:15:46.494 filename=/dev/nvme0n1 00:15:46.494 Could not set queue depth (nvme0n1) 00:15:46.494 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.494 fio-3.35 00:15:46.494 Starting 1 thread 00:15:47.430 00:15:47.430 job0: (groupid=0, jobs=1): err= 0: pid=75470: Thu Jul 11 07:08:31 2024 00:15:47.430 read: IOPS=3154, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:15:47.430 slat (nsec): min=11539, max=59573, avg=14803.57, stdev=4666.50 00:15:47.430 clat (usec): min=115, max=375, avg=150.31, stdev=18.69 00:15:47.430 lat (usec): min=128, max=399, avg=165.12, stdev=19.68 00:15:47.430 clat percentiles (usec): 00:15:47.430 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 137], 00:15:47.430 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:15:47.430 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 176], 
95.00th=[ 184], 00:15:47.430 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 239], 99.95th=[ 281], 00:15:47.430 | 99.99th=[ 375] 00:15:47.430 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:47.430 slat (usec): min=17, max=106, avg=22.48, stdev= 7.18 00:15:47.430 clat (usec): min=81, max=297, avg=108.05, stdev=16.43 00:15:47.430 lat (usec): min=99, max=316, avg=130.53, stdev=18.79 00:15:47.430 clat percentiles (usec): 00:15:47.430 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:15:47.430 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 108], 00:15:47.430 | 70.00th=[ 113], 80.00th=[ 121], 90.00th=[ 133], 95.00th=[ 141], 00:15:47.430 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 188], 99.95th=[ 204], 00:15:47.430 | 99.99th=[ 297] 00:15:47.430 bw ( KiB/s): min=14112, max=14112, per=98.54%, avg=14112.00, stdev= 0.00, samples=1 00:15:47.430 iops : min= 3528, max= 3528, avg=3528.00, stdev= 0.00, samples=1 00:15:47.430 lat (usec) : 100=20.91%, 250=79.03%, 500=0.06% 00:15:47.430 cpu : usr=2.00%, sys=9.70%, ctx=6742, majf=0, minf=2 00:15:47.430 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.430 issued rwts: total=3158,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.430 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.430 00:15:47.430 Run status group 0 (all jobs): 00:15:47.430 READ: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:15:47.430 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:15:47.430 00:15:47.430 Disk stats (read/write): 00:15:47.430 nvme0n1: ios=2988/3072, merge=0/0, ticks=462/371, in_queue=833, util=91.28% 00:15:47.431 07:08:31 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:47.431 07:08:31 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.431 07:08:31 -- common/autotest_common.sh@1198 -- # local i=0 00:15:47.431 07:08:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:47.431 07:08:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.431 07:08:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:47.431 07:08:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.431 07:08:31 -- common/autotest_common.sh@1210 -- # return 0 00:15:47.431 07:08:31 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:47.431 07:08:31 -- target/nmic.sh@53 -- # nvmftestfini 00:15:47.431 07:08:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:47.431 07:08:31 -- nvmf/common.sh@116 -- # sync 00:15:47.431 07:08:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:47.431 07:08:31 -- nvmf/common.sh@119 -- # set +e 00:15:47.431 07:08:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:47.431 07:08:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:47.431 rmmod nvme_tcp 00:15:47.689 rmmod nvme_fabrics 00:15:47.689 rmmod nvme_keyring 00:15:47.689 07:08:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:47.689 07:08:31 -- nvmf/common.sh@123 -- # set -e 00:15:47.689 07:08:31 -- nvmf/common.sh@124 -- # return 0 00:15:47.689 07:08:31 -- nvmf/common.sh@477 -- # '[' -n 
75366 ']' 00:15:47.689 07:08:31 -- nvmf/common.sh@478 -- # killprocess 75366 00:15:47.689 07:08:31 -- common/autotest_common.sh@926 -- # '[' -z 75366 ']' 00:15:47.689 07:08:31 -- common/autotest_common.sh@930 -- # kill -0 75366 00:15:47.689 07:08:31 -- common/autotest_common.sh@931 -- # uname 00:15:47.689 07:08:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:47.689 07:08:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75366 00:15:47.689 07:08:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:47.689 killing process with pid 75366 00:15:47.689 07:08:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:47.689 07:08:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75366' 00:15:47.689 07:08:31 -- common/autotest_common.sh@945 -- # kill 75366 00:15:47.689 07:08:31 -- common/autotest_common.sh@950 -- # wait 75366 00:15:47.948 07:08:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:47.948 07:08:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:47.948 07:08:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:47.948 07:08:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.948 07:08:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:47.948 07:08:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.948 07:08:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.948 07:08:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.948 07:08:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:47.948 00:15:47.948 real 0m5.857s 00:15:47.948 user 0m19.935s 00:15:47.948 sys 0m1.148s 00:15:47.948 07:08:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.948 07:08:31 -- common/autotest_common.sh@10 -- # set +x 00:15:47.948 ************************************ 00:15:47.948 END TEST nvmf_nmic 00:15:47.948 ************************************ 00:15:47.948 07:08:31 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:47.948 07:08:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:47.948 07:08:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:47.948 07:08:31 -- common/autotest_common.sh@10 -- # set +x 00:15:47.948 ************************************ 00:15:47.948 START TEST nvmf_fio_target 00:15:47.948 ************************************ 00:15:47.948 07:08:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:48.207 * Looking for test storage... 
00:15:48.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.207 07:08:32 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.207 07:08:32 -- nvmf/common.sh@7 -- # uname -s 00:15:48.207 07:08:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.208 07:08:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.208 07:08:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.208 07:08:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.208 07:08:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.208 07:08:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.208 07:08:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.208 07:08:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.208 07:08:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.208 07:08:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.208 07:08:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:15:48.208 07:08:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:15:48.208 07:08:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.208 07:08:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.208 07:08:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.208 07:08:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.208 07:08:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.208 07:08:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.208 07:08:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.208 07:08:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.208 07:08:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.208 07:08:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.208 07:08:32 -- paths/export.sh@5 
-- # export PATH 00:15:48.208 07:08:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.208 07:08:32 -- nvmf/common.sh@46 -- # : 0 00:15:48.208 07:08:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:48.208 07:08:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:48.208 07:08:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:48.208 07:08:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.208 07:08:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.208 07:08:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:48.208 07:08:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:48.208 07:08:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:48.208 07:08:32 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.208 07:08:32 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.208 07:08:32 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:48.208 07:08:32 -- target/fio.sh@16 -- # nvmftestinit 00:15:48.208 07:08:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:48.208 07:08:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.208 07:08:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:48.208 07:08:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:48.208 07:08:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:48.208 07:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.208 07:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.208 07:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.208 07:08:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:48.208 07:08:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:48.208 07:08:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:48.208 07:08:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:48.208 07:08:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:48.208 07:08:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:48.208 07:08:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.208 07:08:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.208 07:08:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.208 07:08:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:48.208 07:08:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.208 07:08:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.208 07:08:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.208 07:08:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.208 07:08:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.208 07:08:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.208 07:08:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.208 07:08:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.208 07:08:32 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:48.208 07:08:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:48.208 Cannot find device "nvmf_tgt_br" 00:15:48.208 07:08:32 -- nvmf/common.sh@154 -- # true 00:15:48.208 07:08:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.208 Cannot find device "nvmf_tgt_br2" 00:15:48.208 07:08:32 -- nvmf/common.sh@155 -- # true 00:15:48.208 07:08:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:48.208 07:08:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:48.208 Cannot find device "nvmf_tgt_br" 00:15:48.208 07:08:32 -- nvmf/common.sh@157 -- # true 00:15:48.208 07:08:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:48.208 Cannot find device "nvmf_tgt_br2" 00:15:48.208 07:08:32 -- nvmf/common.sh@158 -- # true 00:15:48.208 07:08:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:48.208 07:08:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:48.208 07:08:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.208 07:08:32 -- nvmf/common.sh@161 -- # true 00:15:48.208 07:08:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.208 07:08:32 -- nvmf/common.sh@162 -- # true 00:15:48.208 07:08:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.208 07:08:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.208 07:08:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.208 07:08:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.467 07:08:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.467 07:08:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.467 07:08:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.467 07:08:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.467 07:08:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.467 07:08:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:48.467 07:08:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:48.467 07:08:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:48.467 07:08:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:48.467 07:08:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.467 07:08:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.467 07:08:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.467 07:08:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:48.467 07:08:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:48.467 07:08:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.467 07:08:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.467 07:08:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.467 07:08:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.467 07:08:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.467 07:08:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:48.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:15:48.467 00:15:48.467 --- 10.0.0.2 ping statistics --- 00:15:48.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.467 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:48.467 07:08:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:48.467 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.467 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:48.467 00:15:48.467 --- 10.0.0.3 ping statistics --- 00:15:48.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.467 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:48.467 07:08:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:15:48.467 00:15:48.467 --- 10.0.0.1 ping statistics --- 00:15:48.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.467 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:48.467 07:08:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.467 07:08:32 -- nvmf/common.sh@421 -- # return 0 00:15:48.467 07:08:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:48.467 07:08:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.467 07:08:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:48.467 07:08:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:48.467 07:08:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.467 07:08:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:48.467 07:08:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:48.467 07:08:32 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:48.467 07:08:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:48.467 07:08:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:48.467 07:08:32 -- common/autotest_common.sh@10 -- # set +x 00:15:48.467 07:08:32 -- nvmf/common.sh@469 -- # nvmfpid=75646 00:15:48.467 07:08:32 -- nvmf/common.sh@470 -- # waitforlisten 75646 00:15:48.467 07:08:32 -- common/autotest_common.sh@819 -- # '[' -z 75646 ']' 00:15:48.467 07:08:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.467 07:08:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:48.467 07:08:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:48.467 07:08:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.467 07:08:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:48.467 07:08:32 -- common/autotest_common.sh@10 -- # set +x 00:15:48.467 [2024-07-11 07:08:32.520670] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:48.467 [2024-07-11 07:08:32.520753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.726 [2024-07-11 07:08:32.657273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.726 [2024-07-11 07:08:32.742754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.726 [2024-07-11 07:08:32.742901] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.726 [2024-07-11 07:08:32.742914] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.726 [2024-07-11 07:08:32.742923] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.726 [2024-07-11 07:08:32.743424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.726 [2024-07-11 07:08:32.743549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.726 [2024-07-11 07:08:32.743663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.726 [2024-07-11 07:08:32.743671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.668 07:08:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:49.668 07:08:33 -- common/autotest_common.sh@852 -- # return 0 00:15:49.668 07:08:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:49.668 07:08:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:49.668 07:08:33 -- common/autotest_common.sh@10 -- # set +x 00:15:49.668 07:08:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.668 07:08:33 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:49.927 [2024-07-11 07:08:33.726161] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.927 07:08:33 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.194 07:08:34 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:50.194 07:08:34 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.464 07:08:34 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:50.464 07:08:34 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.723 07:08:34 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:50.723 07:08:34 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.982 07:08:34 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:50.982 07:08:34 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:51.241 07:08:35 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.500 07:08:35 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:51.500 07:08:35 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.759 07:08:35 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:51.759 07:08:35 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.018 07:08:35 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:15:52.018 07:08:35 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:52.277 07:08:36 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:52.277 07:08:36 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:52.277 07:08:36 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.536 07:08:36 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:52.536 07:08:36 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:52.794 07:08:36 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.052 [2024-07-11 07:08:36.897758] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.052 07:08:36 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:53.052 07:08:37 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:53.310 07:08:37 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.568 07:08:37 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:53.569 07:08:37 -- common/autotest_common.sh@1177 -- # local i=0 00:15:53.569 07:08:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.569 07:08:37 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:15:53.569 07:08:37 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:15:53.569 07:08:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:55.468 07:08:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:55.468 07:08:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:55.468 07:08:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.468 07:08:39 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:15:55.468 07:08:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.468 07:08:39 -- common/autotest_common.sh@1187 -- # return 0 00:15:55.468 07:08:39 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:55.468 [global] 00:15:55.468 thread=1 00:15:55.468 invalidate=1 00:15:55.468 rw=write 00:15:55.468 time_based=1 00:15:55.468 runtime=1 00:15:55.468 ioengine=libaio 00:15:55.468 direct=1 00:15:55.468 bs=4096 00:15:55.468 iodepth=1 00:15:55.468 norandommap=0 00:15:55.468 numjobs=1 00:15:55.468 00:15:55.468 verify_dump=1 00:15:55.468 verify_backlog=512 00:15:55.468 verify_state_save=0 00:15:55.468 do_verify=1 00:15:55.468 verify=crc32c-intel 00:15:55.468 [job0] 00:15:55.468 filename=/dev/nvme0n1 00:15:55.468 [job1] 00:15:55.468 filename=/dev/nvme0n2 00:15:55.468 [job2] 00:15:55.468 filename=/dev/nvme0n3 00:15:55.468 [job3] 00:15:55.468 filename=/dev/nvme0n4 00:15:55.726 Could not set queue depth (nvme0n1) 00:15:55.726 Could not set queue depth (nvme0n2) 
00:15:55.726 Could not set queue depth (nvme0n3) 00:15:55.726 Could not set queue depth (nvme0n4) 00:15:55.726 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.726 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.726 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.726 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.726 fio-3.35 00:15:55.726 Starting 4 threads 00:15:57.100 00:15:57.100 job0: (groupid=0, jobs=1): err= 0: pid=75934: Thu Jul 11 07:08:40 2024 00:15:57.100 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:57.100 slat (nsec): min=10183, max=57569, avg=15972.95, stdev=4788.49 00:15:57.100 clat (usec): min=150, max=535, avg=322.16, stdev=83.11 00:15:57.100 lat (usec): min=169, max=550, avg=338.13, stdev=82.17 00:15:57.100 clat percentiles (usec): 00:15:57.100 | 1.00th=[ 172], 5.00th=[ 192], 10.00th=[ 208], 20.00th=[ 243], 00:15:57.100 | 30.00th=[ 273], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 343], 00:15:57.100 | 70.00th=[ 363], 80.00th=[ 400], 90.00th=[ 441], 95.00th=[ 461], 00:15:57.100 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 537], 00:15:57.100 | 99.99th=[ 537] 00:15:57.100 write: IOPS=1916, BW=7664KiB/s (7848kB/s)(7672KiB/1001msec); 0 zone resets 00:15:57.100 slat (usec): min=10, max=100, avg=21.18, stdev= 7.45 00:15:57.100 clat (usec): min=120, max=428, avg=226.16, stdev=63.00 00:15:57.100 lat (usec): min=143, max=457, avg=247.35, stdev=62.04 00:15:57.100 clat percentiles (usec): 00:15:57.100 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 169], 00:15:57.100 | 30.00th=[ 182], 40.00th=[ 194], 50.00th=[ 206], 60.00th=[ 229], 00:15:57.100 | 70.00th=[ 262], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 338], 00:15:57.100 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 429], 99.95th=[ 429], 00:15:57.100 | 99.99th=[ 429] 00:15:57.100 bw ( KiB/s): min= 8192, max= 8192, per=30.38%, avg=8192.00, stdev= 0.00, samples=1 00:15:57.100 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:57.100 lat (usec) : 250=47.65%, 500=51.97%, 750=0.38% 00:15:57.100 cpu : usr=1.60%, sys=4.80%, ctx=3455, majf=0, minf=6 00:15:57.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.100 issued rwts: total=1536,1918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.100 job1: (groupid=0, jobs=1): err= 0: pid=75940: Thu Jul 11 07:08:40 2024 00:15:57.100 read: IOPS=1459, BW=5838KiB/s (5978kB/s)(5844KiB/1001msec) 00:15:57.100 slat (nsec): min=7964, max=61389, avg=15707.93, stdev=5383.82 00:15:57.100 clat (usec): min=187, max=1700, avg=356.18, stdev=83.73 00:15:57.100 lat (usec): min=206, max=1722, avg=371.89, stdev=83.71 00:15:57.100 clat percentiles (usec): 00:15:57.100 | 1.00th=[ 225], 5.00th=[ 255], 10.00th=[ 269], 20.00th=[ 293], 00:15:57.100 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 363], 00:15:57.100 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[ 474], 00:15:57.100 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 1549], 99.95th=[ 1696], 00:15:57.100 | 99.99th=[ 1696] 00:15:57.100 write: IOPS=1534, 
BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:57.100 slat (usec): min=10, max=101, avg=21.98, stdev= 7.97 00:15:57.100 clat (usec): min=111, max=3264, avg=271.74, stdev=116.07 00:15:57.100 lat (usec): min=141, max=3292, avg=293.72, stdev=117.88 00:15:57.100 clat percentiles (usec): 00:15:57.100 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 196], 00:15:57.100 | 30.00th=[ 221], 40.00th=[ 247], 50.00th=[ 273], 60.00th=[ 289], 00:15:57.100 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 379], 00:15:57.100 | 99.00th=[ 441], 99.50th=[ 510], 99.90th=[ 2040], 99.95th=[ 3261], 00:15:57.100 | 99.99th=[ 3261] 00:15:57.100 bw ( KiB/s): min= 7888, max= 7888, per=29.25%, avg=7888.00, stdev= 0.00, samples=1 00:15:57.100 iops : min= 1972, max= 1972, avg=1972.00, stdev= 0.00, samples=1 00:15:57.100 lat (usec) : 250=22.59%, 500=76.34%, 750=0.90% 00:15:57.100 lat (msec) : 2=0.10%, 4=0.07% 00:15:57.100 cpu : usr=1.10%, sys=4.50%, ctx=2997, majf=0, minf=11 00:15:57.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.100 issued rwts: total=1461,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.100 job2: (groupid=0, jobs=1): err= 0: pid=75942: Thu Jul 11 07:08:40 2024 00:15:57.100 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:57.100 slat (nsec): min=10778, max=66092, avg=17356.20, stdev=5807.49 00:15:57.100 clat (usec): min=173, max=919, avg=334.47, stdev=78.87 00:15:57.100 lat (usec): min=199, max=940, avg=351.82, stdev=78.23 00:15:57.100 clat percentiles (usec): 00:15:57.100 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 269], 00:15:57.100 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 351], 00:15:57.100 | 70.00th=[ 367], 80.00th=[ 404], 90.00th=[ 437], 95.00th=[ 461], 00:15:57.100 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 758], 99.95th=[ 922], 00:15:57.100 | 99.99th=[ 922] 00:15:57.100 write: IOPS=1756, BW=7025KiB/s (7194kB/s)(7032KiB/1001msec); 0 zone resets 00:15:57.100 slat (nsec): min=11164, max=89193, avg=23133.36, stdev=7368.98 00:15:57.100 clat (usec): min=138, max=481, avg=234.93, stdev=62.17 00:15:57.100 lat (usec): min=156, max=501, avg=258.06, stdev=61.05 00:15:57.100 clat percentiles (usec): 00:15:57.100 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 176], 00:15:57.100 | 30.00th=[ 186], 40.00th=[ 200], 50.00th=[ 219], 60.00th=[ 249], 00:15:57.100 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 338], 00:15:57.100 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 396], 99.95th=[ 482], 00:15:57.100 | 99.99th=[ 482] 00:15:57.100 bw ( KiB/s): min= 8192, max= 8192, per=30.38%, avg=8192.00, stdev= 0.00, samples=1 00:15:57.100 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:57.100 lat (usec) : 250=40.89%, 500=58.44%, 750=0.61%, 1000=0.06% 00:15:57.100 cpu : usr=1.50%, sys=4.90%, ctx=3295, majf=0, minf=9 00:15:57.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.100 issued rwts: total=1536,1758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.100 latency : target=0, window=0, percentile=100.00%, depth=1 
00:15:57.100 job3: (groupid=0, jobs=1): err= 0: pid=75943: Thu Jul 11 07:08:40 2024 00:15:57.101 read: IOPS=1385, BW=5542KiB/s (5675kB/s)(5548KiB/1001msec) 00:15:57.101 slat (nsec): min=8839, max=62490, avg=16453.32, stdev=5364.31 00:15:57.101 clat (usec): min=180, max=1052, avg=367.94, stdev=59.65 00:15:57.101 lat (usec): min=191, max=1069, avg=384.39, stdev=60.07 00:15:57.101 clat percentiles (usec): 00:15:57.101 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 322], 00:15:57.101 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:15:57.101 | 70.00th=[ 392], 80.00th=[ 429], 90.00th=[ 449], 95.00th=[ 469], 00:15:57.101 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 676], 99.95th=[ 1057], 00:15:57.101 | 99.99th=[ 1057] 00:15:57.101 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:57.101 slat (usec): min=11, max=107, avg=23.37, stdev= 7.96 00:15:57.101 clat (usec): min=115, max=3448, avg=277.24, stdev=113.79 00:15:57.101 lat (usec): min=139, max=3472, avg=300.61, stdev=115.02 00:15:57.101 clat percentiles (usec): 00:15:57.101 | 1.00th=[ 153], 5.00th=[ 169], 10.00th=[ 186], 20.00th=[ 212], 00:15:57.101 | 30.00th=[ 239], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 297], 00:15:57.101 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 351], 95.00th=[ 371], 00:15:57.101 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 1926], 99.95th=[ 3458], 00:15:57.101 | 99.99th=[ 3458] 00:15:57.101 bw ( KiB/s): min= 7448, max= 7448, per=27.62%, avg=7448.00, stdev= 0.00, samples=1 00:15:57.101 iops : min= 1862, max= 1862, avg=1862.00, stdev= 0.00, samples=1 00:15:57.101 lat (usec) : 250=18.13%, 500=81.12%, 750=0.62% 00:15:57.101 lat (msec) : 2=0.10%, 4=0.03% 00:15:57.101 cpu : usr=1.20%, sys=4.40%, ctx=2925, majf=0, minf=9 00:15:57.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:57.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.101 issued rwts: total=1387,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:57.101 00:15:57.101 Run status group 0 (all jobs): 00:15:57.101 READ: bw=23.1MiB/s (24.2MB/s), 5542KiB/s-6138KiB/s (5675kB/s-6285kB/s), io=23.1MiB (24.2MB), run=1001-1001msec 00:15:57.101 WRITE: bw=26.3MiB/s (27.6MB/s), 6138KiB/s-7664KiB/s (6285kB/s-7848kB/s), io=26.4MiB (27.6MB), run=1001-1001msec 00:15:57.101 00:15:57.101 Disk stats (read/write): 00:15:57.101 nvme0n1: ios=1457/1536, merge=0/0, ticks=535/350, in_queue=885, util=91.57% 00:15:57.101 nvme0n2: ios=1093/1536, merge=0/0, ticks=406/419, in_queue=825, util=87.64% 00:15:57.101 nvme0n3: ios=1302/1536, merge=0/0, ticks=444/366, in_queue=810, util=89.14% 00:15:57.101 nvme0n4: ios=1057/1499, merge=0/0, ticks=446/414, in_queue=860, util=90.76% 00:15:57.101 07:08:40 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:57.101 [global] 00:15:57.101 thread=1 00:15:57.101 invalidate=1 00:15:57.101 rw=randwrite 00:15:57.101 time_based=1 00:15:57.101 runtime=1 00:15:57.101 ioengine=libaio 00:15:57.101 direct=1 00:15:57.101 bs=4096 00:15:57.101 iodepth=1 00:15:57.101 norandommap=0 00:15:57.101 numjobs=1 00:15:57.101 00:15:57.101 verify_dump=1 00:15:57.101 verify_backlog=512 00:15:57.101 verify_state_save=0 00:15:57.101 do_verify=1 00:15:57.101 verify=crc32c-intel 00:15:57.101 [job0] 00:15:57.101 filename=/dev/nvme0n1 00:15:57.101 
[job1] 00:15:57.101 filename=/dev/nvme0n2 00:15:57.101 [job2] 00:15:57.101 filename=/dev/nvme0n3 00:15:57.101 [job3] 00:15:57.101 filename=/dev/nvme0n4 00:15:57.101 Could not set queue depth (nvme0n1) 00:15:57.101 Could not set queue depth (nvme0n2) 00:15:57.101 Could not set queue depth (nvme0n3) 00:15:57.101 Could not set queue depth (nvme0n4) 00:15:57.101 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.101 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.101 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.101 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.101 fio-3.35 00:15:57.101 Starting 4 threads 00:15:58.477 00:15:58.477 job0: (groupid=0, jobs=1): err= 0: pid=75996: Thu Jul 11 07:08:42 2024 00:15:58.477 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:58.477 slat (nsec): min=10246, max=59584, avg=12189.49, stdev=3127.14 00:15:58.477 clat (usec): min=196, max=553, avg=334.23, stdev=34.88 00:15:58.477 lat (usec): min=207, max=566, avg=346.42, stdev=35.20 00:15:58.477 clat percentiles (usec): 00:15:58.477 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 310], 00:15:58.477 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:15:58.477 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 396], 00:15:58.477 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 523], 99.95th=[ 553], 00:15:58.477 | 99.99th=[ 553] 00:15:58.477 write: IOPS=1565, BW=6262KiB/s (6412kB/s)(6268KiB/1001msec); 0 zone resets 00:15:58.477 slat (nsec): min=9839, max=87241, avg=19640.62, stdev=5752.26 00:15:58.477 clat (usec): min=116, max=476, avg=275.76, stdev=47.27 00:15:58.477 lat (usec): min=144, max=522, avg=295.40, stdev=47.57 00:15:58.477 clat percentiles (usec): 00:15:58.477 | 1.00th=[ 149], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 237], 00:15:58.477 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:15:58.477 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 355], 00:15:58.477 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 474], 99.95th=[ 478], 00:15:58.477 | 99.99th=[ 478] 00:15:58.477 bw ( KiB/s): min= 8192, max= 8192, per=25.09%, avg=8192.00, stdev= 0.00, samples=1 00:15:58.477 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:58.477 lat (usec) : 250=16.44%, 500=83.50%, 750=0.06% 00:15:58.477 cpu : usr=1.20%, sys=3.70%, ctx=3103, majf=0, minf=17 00:15:58.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.477 issued rwts: total=1536,1567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.477 job1: (groupid=0, jobs=1): err= 0: pid=75997: Thu Jul 11 07:08:42 2024 00:15:58.477 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.477 slat (nsec): min=12792, max=47959, avg=15861.27, stdev=3456.95 00:15:58.477 clat (usec): min=167, max=315, avg=219.12, stdev=20.38 00:15:58.477 lat (usec): min=181, max=329, avg=234.98, stdev=20.75 00:15:58.477 clat percentiles (usec): 00:15:58.477 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:15:58.477 | 30.00th=[ 208], 
40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:15:58.477 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 253], 00:15:58.477 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 310], 00:15:58.477 | 99.99th=[ 318] 00:15:58.477 write: IOPS=2530, BW=9.88MiB/s (10.4MB/s)(9.89MiB/1001msec); 0 zone resets 00:15:58.477 slat (nsec): min=18350, max=95320, avg=24061.81, stdev=6134.57 00:15:58.477 clat (usec): min=104, max=713, avg=177.68, stdev=26.87 00:15:58.477 lat (usec): min=126, max=734, avg=201.74, stdev=27.78 00:15:58.477 clat percentiles (usec): 00:15:58.477 | 1.00th=[ 129], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 157], 00:15:58.477 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:15:58.477 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 219], 00:15:58.477 | 99.00th=[ 239], 99.50th=[ 253], 99.90th=[ 314], 99.95th=[ 461], 00:15:58.477 | 99.99th=[ 717] 00:15:58.477 bw ( KiB/s): min= 9904, max= 9904, per=30.34%, avg=9904.00, stdev= 0.00, samples=1 00:15:58.477 iops : min= 2476, max= 2476, avg=2476.00, stdev= 0.00, samples=1 00:15:58.477 lat (usec) : 250=96.59%, 500=3.38%, 750=0.02% 00:15:58.477 cpu : usr=1.80%, sys=6.60%, ctx=4584, majf=0, minf=7 00:15:58.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.477 issued rwts: total=2048,2533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.477 job2: (groupid=0, jobs=1): err= 0: pid=75998: Thu Jul 11 07:08:42 2024 00:15:58.477 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:58.477 slat (nsec): min=10295, max=56745, avg=13080.90, stdev=3422.48 00:15:58.477 clat (usec): min=186, max=523, avg=333.37, stdev=34.90 00:15:58.477 lat (usec): min=197, max=535, avg=346.45, stdev=35.36 00:15:58.477 clat percentiles (usec): 00:15:58.477 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 306], 00:15:58.477 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:15:58.477 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 396], 00:15:58.477 | 99.00th=[ 437], 99.50th=[ 474], 99.90th=[ 519], 99.95th=[ 523], 00:15:58.477 | 99.99th=[ 523] 00:15:58.477 write: IOPS=1565, BW=6262KiB/s (6412kB/s)(6268KiB/1001msec); 0 zone resets 00:15:58.477 slat (usec): min=9, max=112, avg=19.93, stdev= 6.07 00:15:58.477 clat (usec): min=90, max=484, avg=275.48, stdev=47.58 00:15:58.477 lat (usec): min=153, max=497, avg=295.40, stdev=47.63 00:15:58.477 clat percentiles (usec): 00:15:58.477 | 1.00th=[ 161], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 237], 00:15:58.477 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:15:58.477 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 359], 00:15:58.478 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 453], 99.95th=[ 486], 00:15:58.478 | 99.99th=[ 486] 00:15:58.478 bw ( KiB/s): min= 8192, max= 8192, per=25.09%, avg=8192.00, stdev= 0.00, samples=1 00:15:58.478 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:58.478 lat (usec) : 100=0.03%, 250=17.24%, 500=82.63%, 750=0.10% 00:15:58.478 cpu : usr=0.80%, sys=4.10%, ctx=3105, majf=0, minf=10 00:15:58.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.478 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.478 issued rwts: total=1536,1567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.478 job3: (groupid=0, jobs=1): err= 0: pid=75999: Thu Jul 11 07:08:42 2024 00:15:58.478 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.478 slat (nsec): min=13030, max=49134, avg=16207.10, stdev=3748.42 00:15:58.478 clat (usec): min=155, max=331, avg=218.92, stdev=21.91 00:15:58.478 lat (usec): min=169, max=346, avg=235.12, stdev=22.48 00:15:58.478 clat percentiles (usec): 00:15:58.478 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 202], 00:15:58.478 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 223], 00:15:58.478 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 255], 00:15:58.478 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 318], 99.95th=[ 326], 00:15:58.478 | 99.99th=[ 334] 00:15:58.478 write: IOPS=2500, BW=9.77MiB/s (10.2MB/s)(9.78MiB/1001msec); 0 zone resets 00:15:58.478 slat (nsec): min=18389, max=83117, avg=23665.44, stdev=5910.81 00:15:58.478 clat (usec): min=121, max=310, avg=180.51, stdev=24.65 00:15:58.478 lat (usec): min=141, max=366, avg=204.18, stdev=25.59 00:15:58.478 clat percentiles (usec): 00:15:58.478 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:15:58.478 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:15:58.478 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 223], 00:15:58.478 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 289], 00:15:58.478 | 99.99th=[ 310] 00:15:58.478 bw ( KiB/s): min= 9744, max= 9744, per=29.85%, avg=9744.00, stdev= 0.00, samples=1 00:15:58.478 iops : min= 2436, max= 2436, avg=2436.00, stdev= 0.00, samples=1 00:15:58.478 lat (usec) : 250=96.31%, 500=3.69% 00:15:58.478 cpu : usr=1.40%, sys=6.90%, ctx=4553, majf=0, minf=11 00:15:58.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.478 issued rwts: total=2048,2503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.478 00:15:58.478 Run status group 0 (all jobs): 00:15:58.478 READ: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:15:58.478 WRITE: bw=31.9MiB/s (33.4MB/s), 6262KiB/s-9.88MiB/s (6412kB/s-10.4MB/s), io=31.9MiB (33.5MB), run=1001-1001msec 00:15:58.478 00:15:58.478 Disk stats (read/write): 00:15:58.478 nvme0n1: ios=1232/1536, merge=0/0, ticks=440/440, in_queue=880, util=89.38% 00:15:58.478 nvme0n2: ios=1940/2048, merge=0/0, ticks=463/382, in_queue=845, util=89.69% 00:15:58.478 nvme0n3: ios=1182/1536, merge=0/0, ticks=397/444, in_queue=841, util=89.30% 00:15:58.478 nvme0n4: ios=1875/2048, merge=0/0, ticks=441/397, in_queue=838, util=89.96% 00:15:58.478 07:08:42 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:58.478 [global] 00:15:58.478 thread=1 00:15:58.478 invalidate=1 00:15:58.478 rw=write 00:15:58.478 time_based=1 00:15:58.478 runtime=1 00:15:58.478 ioengine=libaio 00:15:58.478 direct=1 00:15:58.478 bs=4096 00:15:58.478 iodepth=128 00:15:58.478 norandommap=0 00:15:58.478 numjobs=1 00:15:58.478 00:15:58.478 verify_dump=1 00:15:58.478 verify_backlog=512 
00:15:58.478 verify_state_save=0 00:15:58.478 do_verify=1 00:15:58.478 verify=crc32c-intel 00:15:58.478 [job0] 00:15:58.478 filename=/dev/nvme0n1 00:15:58.478 [job1] 00:15:58.478 filename=/dev/nvme0n2 00:15:58.478 [job2] 00:15:58.478 filename=/dev/nvme0n3 00:15:58.478 [job3] 00:15:58.478 filename=/dev/nvme0n4 00:15:58.478 Could not set queue depth (nvme0n1) 00:15:58.478 Could not set queue depth (nvme0n2) 00:15:58.478 Could not set queue depth (nvme0n3) 00:15:58.478 Could not set queue depth (nvme0n4) 00:15:58.478 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.478 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.478 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.478 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.478 fio-3.35 00:15:58.478 Starting 4 threads 00:15:59.853 00:15:59.853 job0: (groupid=0, jobs=1): err= 0: pid=76057: Thu Jul 11 07:08:43 2024 00:15:59.853 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(9.89MiB/1004msec) 00:15:59.853 slat (usec): min=5, max=11194, avg=191.14, stdev=767.54 00:15:59.853 clat (usec): min=686, max=30866, avg=25200.47, stdev=3475.36 00:15:59.853 lat (usec): min=3688, max=30882, avg=25391.61, stdev=3386.06 00:15:59.853 clat percentiles (usec): 00:15:59.853 | 1.00th=[ 7504], 5.00th=[21627], 10.00th=[23725], 20.00th=[24249], 00:15:59.853 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:15:59.853 | 70.00th=[25560], 80.00th=[28443], 90.00th=[29492], 95.00th=[29754], 00:15:59.853 | 99.00th=[30802], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:15:59.853 | 99.99th=[30802] 00:15:59.854 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:15:59.854 slat (usec): min=19, max=8824, avg=192.65, stdev=903.99 00:15:59.854 clat (usec): min=15465, max=31356, avg=24376.37, stdev=2256.35 00:15:59.854 lat (usec): min=18286, max=31383, avg=24569.02, stdev=2112.52 00:15:59.854 clat percentiles (usec): 00:15:59.854 | 1.00th=[19268], 5.00th=[20317], 10.00th=[20841], 20.00th=[22414], 00:15:59.854 | 30.00th=[23462], 40.00th=[24249], 50.00th=[24511], 60.00th=[25297], 00:15:59.854 | 70.00th=[25822], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:15:59.854 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:15:59.854 | 99.99th=[31327] 00:15:59.854 bw ( KiB/s): min= 9872, max=10629, per=25.94%, avg=10250.50, stdev=535.28, samples=2 00:15:59.854 iops : min= 2468, max= 2657, avg=2562.50, stdev=133.64, samples=2 00:15:59.854 lat (usec) : 750=0.02% 00:15:59.854 lat (msec) : 4=0.08%, 10=0.55%, 20=2.69%, 50=96.66% 00:15:59.854 cpu : usr=2.79%, sys=9.07%, ctx=231, majf=0, minf=13 00:15:59.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.854 issued rwts: total=2533,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.854 job1: (groupid=0, jobs=1): err= 0: pid=76059: Thu Jul 11 07:08:43 2024 00:15:59.854 read: IOPS=2328, BW=9313KiB/s (9537kB/s)(9360KiB/1005msec) 00:15:59.854 slat (usec): min=6, max=10779, avg=205.63, stdev=980.54 00:15:59.854 clat (usec): min=1797, max=40931, 
avg=25671.66, stdev=4828.66 00:15:59.854 lat (usec): min=7761, max=43040, avg=25877.29, stdev=4899.75 00:15:59.854 clat percentiles (usec): 00:15:59.854 | 1.00th=[13173], 5.00th=[19792], 10.00th=[20579], 20.00th=[22414], 00:15:59.854 | 30.00th=[22938], 40.00th=[23725], 50.00th=[24511], 60.00th=[25822], 00:15:59.854 | 70.00th=[27395], 80.00th=[30540], 90.00th=[32900], 95.00th=[33817], 00:15:59.854 | 99.00th=[36439], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:15:59.854 | 99.99th=[41157] 00:15:59.854 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:15:59.854 slat (usec): min=12, max=7470, avg=195.54, stdev=848.98 00:15:59.854 clat (usec): min=14633, max=46305, avg=26047.20, stdev=6626.53 00:15:59.854 lat (usec): min=14654, max=46330, avg=26242.73, stdev=6683.38 00:15:59.854 clat percentiles (usec): 00:15:59.854 | 1.00th=[16319], 5.00th=[17171], 10.00th=[18220], 20.00th=[19530], 00:15:59.854 | 30.00th=[20579], 40.00th=[24773], 50.00th=[25297], 60.00th=[28443], 00:15:59.854 | 70.00th=[29230], 80.00th=[30278], 90.00th=[35390], 95.00th=[40633], 00:15:59.854 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:15:59.854 | 99.99th=[46400] 00:15:59.854 bw ( KiB/s): min= 8192, max=12288, per=25.92%, avg=10240.00, stdev=2896.31, samples=2 00:15:59.854 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:15:59.854 lat (msec) : 2=0.02%, 10=0.45%, 20=16.51%, 50=83.02% 00:15:59.854 cpu : usr=3.19%, sys=6.57%, ctx=269, majf=0, minf=11 00:15:59.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.854 issued rwts: total=2340,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.854 job2: (groupid=0, jobs=1): err= 0: pid=76060: Thu Jul 11 07:08:43 2024 00:15:59.854 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:15:59.854 slat (usec): min=6, max=12366, avg=263.80, stdev=1201.35 00:15:59.854 clat (usec): min=20842, max=53217, avg=34288.60, stdev=7280.11 00:15:59.854 lat (usec): min=24488, max=53233, avg=34552.40, stdev=7247.62 00:15:59.854 clat percentiles (usec): 00:15:59.854 | 1.00th=[24773], 5.00th=[25560], 10.00th=[28181], 20.00th=[28967], 00:15:59.854 | 30.00th=[29492], 40.00th=[30540], 50.00th=[31327], 60.00th=[32900], 00:15:59.854 | 70.00th=[35914], 80.00th=[39584], 90.00th=[46924], 95.00th=[50594], 00:15:59.854 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:15:59.854 | 99.99th=[53216] 00:15:59.854 write: IOPS=2214, BW=8857KiB/s (9069kB/s)(8892KiB/1004msec); 0 zone resets 00:15:59.854 slat (usec): min=14, max=8163, avg=197.02, stdev=957.72 00:15:59.854 clat (usec): min=3269, max=34522, avg=25024.41, stdev=5214.70 00:15:59.854 lat (usec): min=3320, max=34553, avg=25221.43, stdev=5162.70 00:15:59.854 clat percentiles (usec): 00:15:59.854 | 1.00th=[ 8586], 5.00th=[18744], 10.00th=[20579], 20.00th=[21890], 00:15:59.854 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[25297], 00:15:59.854 | 70.00th=[27132], 80.00th=[28967], 90.00th=[33424], 95.00th=[33817], 00:15:59.854 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:15:59.854 | 99.99th=[34341] 00:15:59.854 bw ( KiB/s): min= 8208, max= 8576, per=21.24%, avg=8392.00, stdev=260.22, samples=2 00:15:59.854 iops : min= 2052, max= 2144, 
avg=2098.00, stdev=65.05, samples=2 00:15:59.854 lat (msec) : 4=0.35%, 10=0.75%, 20=2.39%, 50=93.96%, 100=2.55% 00:15:59.854 cpu : usr=1.89%, sys=8.08%, ctx=188, majf=0, minf=7 00:15:59.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:15:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.854 issued rwts: total=2048,2223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.854 job3: (groupid=0, jobs=1): err= 0: pid=76061: Thu Jul 11 07:08:43 2024 00:15:59.854 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:15:59.854 slat (usec): min=10, max=6398, avg=192.24, stdev=811.01 00:15:59.854 clat (usec): min=7554, max=31320, avg=24015.32, stdev=2853.20 00:15:59.854 lat (usec): min=7569, max=31338, avg=24207.55, stdev=2772.64 00:15:59.854 clat percentiles (usec): 00:15:59.854 | 1.00th=[11994], 5.00th=[19792], 10.00th=[20841], 20.00th=[22414], 00:15:59.854 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24511], 60.00th=[24511], 00:15:59.854 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26870], 95.00th=[27657], 00:15:59.854 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:15:59.854 | 99.99th=[31327] 00:15:59.854 write: IOPS=2573, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1004msec); 0 zone resets 00:15:59.854 slat (usec): min=13, max=6610, avg=188.21, stdev=886.39 00:15:59.854 clat (usec): min=806, max=31613, avg=25036.09, stdev=3451.58 00:15:59.854 lat (usec): min=5368, max=31640, avg=25224.31, stdev=3349.35 00:15:59.854 clat percentiles (usec): 00:15:59.854 | 1.00th=[14615], 5.00th=[19268], 10.00th=[20841], 20.00th=[22938], 00:15:59.854 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:15:59.854 | 70.00th=[26608], 80.00th=[26870], 90.00th=[28705], 95.00th=[30278], 00:15:59.854 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:15:59.854 | 99.99th=[31589] 00:15:59.854 bw ( KiB/s): min= 9024, max=11478, per=25.95%, avg=10251.00, stdev=1735.24, samples=2 00:15:59.854 iops : min= 2256, max= 2869, avg=2562.50, stdev=433.46, samples=2 00:15:59.854 lat (usec) : 1000=0.02% 00:15:59.854 lat (msec) : 10=0.62%, 20=6.65%, 50=92.71% 00:15:59.854 cpu : usr=3.09%, sys=8.08%, ctx=246, majf=0, minf=10 00:15:59.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.854 issued rwts: total=2560,2584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.854 00:15:59.854 Run status group 0 (all jobs): 00:15:59.854 READ: bw=36.9MiB/s (38.6MB/s), 8159KiB/s-9.96MiB/s (8355kB/s-10.4MB/s), io=37.0MiB (38.8MB), run=1004-1005msec 00:15:59.854 WRITE: bw=38.6MiB/s (40.5MB/s), 8857KiB/s-10.1MiB/s (9069kB/s-10.5MB/s), io=38.8MiB (40.7MB), run=1004-1005msec 00:15:59.854 00:15:59.854 Disk stats (read/write): 00:15:59.854 nvme0n1: ios=2097/2244, merge=0/0, ticks=12490/12410, in_queue=24900, util=87.85% 00:15:59.854 nvme0n2: ios=2084/2191, merge=0/0, ticks=17119/16940, in_queue=34059, util=89.01% 00:15:59.854 nvme0n3: ios=1544/2048, merge=0/0, ticks=13641/11699, in_queue=25340, util=89.12% 00:15:59.854 nvme0n4: ios=2048/2359, merge=0/0, ticks=12367/12963, in_queue=25330, util=89.68% 00:15:59.854 
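The write and randwrite passes in this log are all driven by scripts/fio-wrapper, which turns its flags into the job file echoed above: -i 4096 becomes bs=4096, -d 128 becomes iodepth=128, -t randwrite becomes rw=randwrite, -r 1 becomes runtime=1, and -v switches on crc32c-intel verification against the four namespaces of the connected controller. A minimal standalone fio call reproducing the same options against one namespace might look like the sketch below; the device path and job name are assumptions for illustration, not something the wrapper itself prints.

    # Sketch only: one job with the options the wrapper echoes above (device path assumed).
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 --numjobs=1 \
        --bs=4096 --iodepth=128 --rw=randwrite --time_based=1 --runtime=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0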
07:08:43 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:59.854 [global] 00:15:59.854 thread=1 00:15:59.854 invalidate=1 00:15:59.854 rw=randwrite 00:15:59.854 time_based=1 00:15:59.854 runtime=1 00:15:59.854 ioengine=libaio 00:15:59.854 direct=1 00:15:59.854 bs=4096 00:15:59.854 iodepth=128 00:15:59.854 norandommap=0 00:15:59.854 numjobs=1 00:15:59.854 00:15:59.854 verify_dump=1 00:15:59.854 verify_backlog=512 00:15:59.854 verify_state_save=0 00:15:59.854 do_verify=1 00:15:59.854 verify=crc32c-intel 00:15:59.854 [job0] 00:15:59.854 filename=/dev/nvme0n1 00:15:59.854 [job1] 00:15:59.854 filename=/dev/nvme0n2 00:15:59.854 [job2] 00:15:59.854 filename=/dev/nvme0n3 00:15:59.854 [job3] 00:15:59.854 filename=/dev/nvme0n4 00:15:59.854 Could not set queue depth (nvme0n1) 00:15:59.854 Could not set queue depth (nvme0n2) 00:15:59.854 Could not set queue depth (nvme0n3) 00:15:59.854 Could not set queue depth (nvme0n4) 00:15:59.854 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.854 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.854 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.854 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.854 fio-3.35 00:15:59.854 Starting 4 threads 00:16:01.231 00:16:01.231 job0: (groupid=0, jobs=1): err= 0: pid=76122: Thu Jul 11 07:08:45 2024 00:16:01.231 read: IOPS=1915, BW=7661KiB/s (7845kB/s)(7684KiB/1003msec) 00:16:01.231 slat (usec): min=5, max=18094, avg=247.75, stdev=1317.38 00:16:01.231 clat (usec): min=622, max=51305, avg=29187.58, stdev=6577.53 00:16:01.231 lat (usec): min=2612, max=52377, avg=29435.33, stdev=6668.53 00:16:01.231 clat percentiles (usec): 00:16:01.231 | 1.00th=[ 8586], 5.00th=[19792], 10.00th=[22152], 20.00th=[25297], 00:16:01.231 | 30.00th=[26346], 40.00th=[28181], 50.00th=[30016], 60.00th=[30802], 00:16:01.231 | 70.00th=[31851], 80.00th=[33162], 90.00th=[35914], 95.00th=[39584], 00:16:01.231 | 99.00th=[45876], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:16:01.231 | 99.99th=[51119] 00:16:01.231 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:16:01.231 slat (usec): min=5, max=14706, avg=247.87, stdev=995.32 00:16:01.231 clat (usec): min=13910, max=52352, avg=34214.23, stdev=4881.56 00:16:01.231 lat (usec): min=13933, max=53372, avg=34462.11, stdev=4983.17 00:16:01.231 clat percentiles (usec): 00:16:01.231 | 1.00th=[18482], 5.00th=[24511], 10.00th=[29492], 20.00th=[31065], 00:16:01.231 | 30.00th=[33424], 40.00th=[34341], 50.00th=[34866], 60.00th=[35390], 00:16:01.231 | 70.00th=[36439], 80.00th=[36963], 90.00th=[38011], 95.00th=[41157], 00:16:01.231 | 99.00th=[46924], 99.50th=[49021], 99.90th=[52167], 99.95th=[52167], 00:16:01.231 | 99.99th=[52167] 00:16:01.231 bw ( KiB/s): min= 8192, max= 8192, per=16.14%, avg=8192.00, stdev= 0.00, samples=2 00:16:01.231 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:01.231 lat (usec) : 750=0.03% 00:16:01.231 lat (msec) : 4=0.20%, 10=1.39%, 20=1.99%, 50=96.09%, 100=0.30% 00:16:01.231 cpu : usr=2.40%, sys=5.89%, ctx=684, majf=0, minf=13 00:16:01.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:01.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:16:01.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.231 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.231 job1: (groupid=0, jobs=1): err= 0: pid=76123: Thu Jul 11 07:08:45 2024 00:16:01.231 read: IOPS=4138, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1002msec) 00:16:01.231 slat (usec): min=5, max=13146, avg=110.37, stdev=716.44 00:16:01.231 clat (usec): min=489, max=29009, avg=14640.10, stdev=3033.63 00:16:01.231 lat (usec): min=3655, max=29033, avg=14750.47, stdev=3060.60 00:16:01.231 clat percentiles (usec): 00:16:01.231 | 1.00th=[ 4228], 5.00th=[10945], 10.00th=[11600], 20.00th=[12518], 00:16:01.231 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14484], 60.00th=[15270], 00:16:01.231 | 70.00th=[15926], 80.00th=[16450], 90.00th=[17433], 95.00th=[18744], 00:16:01.231 | 99.00th=[25822], 99.50th=[27395], 99.90th=[28967], 99.95th=[28967], 00:16:01.231 | 99.99th=[28967] 00:16:01.231 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:16:01.231 slat (usec): min=5, max=12898, avg=110.04, stdev=728.53 00:16:01.231 clat (usec): min=2497, max=28971, avg=14382.02, stdev=2915.87 00:16:01.231 lat (usec): min=2521, max=28983, avg=14492.05, stdev=2982.94 00:16:01.231 clat percentiles (usec): 00:16:01.231 | 1.00th=[ 4686], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[12911], 00:16:01.231 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14484], 60.00th=[15008], 00:16:01.231 | 70.00th=[15664], 80.00th=[16319], 90.00th=[17171], 95.00th=[17957], 00:16:01.231 | 99.00th=[22938], 99.50th=[25035], 99.90th=[27132], 99.95th=[27132], 00:16:01.231 | 99.99th=[28967] 00:16:01.231 bw ( KiB/s): min=17536, max=18720, per=35.71%, avg=18128.00, stdev=837.21, samples=2 00:16:01.231 iops : min= 4384, max= 4680, avg=4532.00, stdev=209.30, samples=2 00:16:01.231 lat (usec) : 500=0.01% 00:16:01.231 lat (msec) : 4=0.73%, 10=3.51%, 20=92.94%, 50=2.81% 00:16:01.231 cpu : usr=3.40%, sys=12.69%, ctx=422, majf=0, minf=7 00:16:01.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:01.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.231 issued rwts: total=4147,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.231 job2: (groupid=0, jobs=1): err= 0: pid=76124: Thu Jul 11 07:08:45 2024 00:16:01.231 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:16:01.231 slat (usec): min=5, max=15782, avg=135.03, stdev=899.96 00:16:01.231 clat (usec): min=5674, max=34156, avg=17329.41, stdev=4374.99 00:16:01.231 lat (usec): min=5690, max=34169, avg=17464.45, stdev=4430.00 00:16:01.231 clat percentiles (usec): 00:16:01.231 | 1.00th=[ 7308], 5.00th=[11731], 10.00th=[12518], 20.00th=[14746], 00:16:01.231 | 30.00th=[15008], 40.00th=[15795], 50.00th=[16188], 60.00th=[16909], 00:16:01.231 | 70.00th=[18220], 80.00th=[20055], 90.00th=[22676], 95.00th=[27395], 00:16:01.231 | 99.00th=[31851], 99.50th=[32375], 99.90th=[34341], 99.95th=[34341], 00:16:01.231 | 99.99th=[34341] 00:16:01.231 write: IOPS=4040, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1006msec); 0 zone resets 00:16:01.231 slat (usec): min=5, max=15893, avg=119.79, stdev=786.93 00:16:01.231 clat (usec): min=536, max=34688, avg=16069.26, stdev=3599.69 00:16:01.231 lat (usec): min=3687, max=34783, avg=16189.05, stdev=3681.56 
00:16:01.231 clat percentiles (usec): 00:16:01.231 | 1.00th=[ 5604], 5.00th=[ 8455], 10.00th=[11076], 20.00th=[13960], 00:16:01.231 | 30.00th=[15008], 40.00th=[15795], 50.00th=[16712], 60.00th=[17433], 00:16:01.231 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19792], 95.00th=[20317], 00:16:01.231 | 99.00th=[21365], 99.50th=[25822], 99.90th=[32113], 99.95th=[32375], 00:16:01.231 | 99.99th=[34866] 00:16:01.231 bw ( KiB/s): min=15112, max=16416, per=31.05%, avg=15764.00, stdev=922.07, samples=2 00:16:01.231 iops : min= 3778, max= 4104, avg=3941.00, stdev=230.52, samples=2 00:16:01.231 lat (usec) : 750=0.01% 00:16:01.231 lat (msec) : 4=0.07%, 10=4.07%, 20=82.31%, 50=13.54% 00:16:01.231 cpu : usr=3.68%, sys=10.95%, ctx=426, majf=0, minf=8 00:16:01.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:01.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.231 issued rwts: total=3584,4065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.232 job3: (groupid=0, jobs=1): err= 0: pid=76125: Thu Jul 11 07:08:45 2024 00:16:01.232 read: IOPS=1846, BW=7386KiB/s (7564kB/s)(7416KiB/1004msec) 00:16:01.232 slat (usec): min=5, max=15404, avg=257.88, stdev=1355.54 00:16:01.232 clat (usec): min=2512, max=46115, avg=29929.90, stdev=5226.20 00:16:01.232 lat (usec): min=7596, max=46132, avg=30187.78, stdev=5327.33 00:16:01.232 clat percentiles (usec): 00:16:01.232 | 1.00th=[14484], 5.00th=[21365], 10.00th=[25297], 20.00th=[26870], 00:16:01.232 | 30.00th=[28705], 40.00th=[29754], 50.00th=[30016], 60.00th=[30802], 00:16:01.232 | 70.00th=[31589], 80.00th=[32637], 90.00th=[34866], 95.00th=[39584], 00:16:01.232 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:16:01.232 | 99.99th=[45876] 00:16:01.232 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:16:01.232 slat (usec): min=5, max=14814, avg=246.55, stdev=1000.83 00:16:01.232 clat (usec): min=17301, max=50044, avg=34473.64, stdev=4354.34 00:16:01.232 lat (usec): min=17332, max=50075, avg=34720.19, stdev=4476.64 00:16:01.232 clat percentiles (usec): 00:16:01.232 | 1.00th=[22676], 5.00th=[26870], 10.00th=[29492], 20.00th=[31065], 00:16:01.232 | 30.00th=[33162], 40.00th=[34341], 50.00th=[35390], 60.00th=[35390], 00:16:01.232 | 70.00th=[35914], 80.00th=[36963], 90.00th=[38536], 95.00th=[41681], 00:16:01.232 | 99.00th=[47449], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:16:01.232 | 99.99th=[50070] 00:16:01.232 bw ( KiB/s): min= 8192, max= 8208, per=16.15%, avg=8200.00, stdev=11.31, samples=2 00:16:01.232 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:16:01.232 lat (msec) : 4=0.03%, 10=0.23%, 20=1.85%, 50=97.87%, 100=0.03% 00:16:01.232 cpu : usr=2.79%, sys=5.38%, ctx=645, majf=0, minf=17 00:16:01.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:01.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.232 issued rwts: total=1854,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.232 00:16:01.232 Run status group 0 (all jobs): 00:16:01.232 READ: bw=44.7MiB/s (46.8MB/s), 7386KiB/s-16.2MiB/s (7564kB/s-17.0MB/s), io=44.9MiB (47.1MB), run=1002-1006msec 00:16:01.232 
WRITE: bw=49.6MiB/s (52.0MB/s), 8159KiB/s-18.0MiB/s (8355kB/s-18.8MB/s), io=49.9MiB (52.3MB), run=1002-1006msec 00:16:01.232 00:16:01.232 Disk stats (read/write): 00:16:01.232 nvme0n1: ios=1586/1831, merge=0/0, ticks=22650/28857, in_queue=51507, util=87.66% 00:16:01.232 nvme0n2: ios=3633/4047, merge=0/0, ticks=43447/45776, in_queue=89223, util=89.57% 00:16:01.232 nvme0n3: ios=3104/3407, merge=0/0, ticks=50057/53295, in_queue=103352, util=90.01% 00:16:01.232 nvme0n4: ios=1542/1808, merge=0/0, ticks=22876/28768, in_queue=51644, util=89.52% 00:16:01.232 07:08:45 -- target/fio.sh@55 -- # sync 00:16:01.232 07:08:45 -- target/fio.sh@59 -- # fio_pid=76139 00:16:01.232 07:08:45 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:01.232 07:08:45 -- target/fio.sh@61 -- # sleep 3 00:16:01.232 [global] 00:16:01.232 thread=1 00:16:01.232 invalidate=1 00:16:01.232 rw=read 00:16:01.232 time_based=1 00:16:01.232 runtime=10 00:16:01.232 ioengine=libaio 00:16:01.232 direct=1 00:16:01.232 bs=4096 00:16:01.232 iodepth=1 00:16:01.232 norandommap=1 00:16:01.232 numjobs=1 00:16:01.232 00:16:01.232 [job0] 00:16:01.232 filename=/dev/nvme0n1 00:16:01.232 [job1] 00:16:01.232 filename=/dev/nvme0n2 00:16:01.232 [job2] 00:16:01.232 filename=/dev/nvme0n3 00:16:01.232 [job3] 00:16:01.232 filename=/dev/nvme0n4 00:16:01.232 Could not set queue depth (nvme0n1) 00:16:01.232 Could not set queue depth (nvme0n2) 00:16:01.232 Could not set queue depth (nvme0n3) 00:16:01.232 Could not set queue depth (nvme0n4) 00:16:01.232 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.232 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.232 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.232 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.232 fio-3.35 00:16:01.232 Starting 4 threads 00:16:04.518 07:08:48 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:04.518 fio: pid=76182, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:04.518 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=32108544, buflen=4096 00:16:04.518 07:08:48 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:04.518 fio: pid=76181, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:04.518 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=43741184, buflen=4096 00:16:04.518 07:08:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:04.518 07:08:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:04.777 fio: pid=76179, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:04.777 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=38178816, buflen=4096 00:16:04.777 07:08:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:04.777 07:08:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:05.036 fio: pid=76180, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:05.036 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read 
offset=31686656, buflen=4096 00:16:05.036 00:16:05.036 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76179: Thu Jul 11 07:08:49 2024 00:16:05.036 read: IOPS=2783, BW=10.9MiB/s (11.4MB/s)(36.4MiB/3349msec) 00:16:05.036 slat (usec): min=7, max=11800, avg=23.67, stdev=218.34 00:16:05.036 clat (usec): min=3, max=3881, avg=333.78, stdev=89.87 00:16:05.036 lat (usec): min=124, max=11999, avg=357.45, stdev=234.38 00:16:05.036 clat percentiles (usec): 00:16:05.036 | 1.00th=[ 137], 5.00th=[ 200], 10.00th=[ 233], 20.00th=[ 293], 00:16:05.036 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:16:05.036 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 441], 00:16:05.036 | 99.00th=[ 502], 99.50th=[ 553], 99.90th=[ 1037], 99.95th=[ 1352], 00:16:05.036 | 99.99th=[ 3884] 00:16:05.036 bw ( KiB/s): min= 9496, max=11528, per=26.82%, avg=10676.17, stdev=861.51, samples=6 00:16:05.036 iops : min= 2374, max= 2882, avg=2669.00, stdev=215.36, samples=6 00:16:05.036 lat (usec) : 4=0.04%, 250=11.21%, 500=87.72%, 750=0.80%, 1000=0.09% 00:16:05.036 lat (msec) : 2=0.11%, 4=0.02% 00:16:05.036 cpu : usr=1.05%, sys=4.24%, ctx=9361, majf=0, minf=1 00:16:05.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.036 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.036 issued rwts: total=9322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.036 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76180: Thu Jul 11 07:08:49 2024 00:16:05.036 read: IOPS=2164, BW=8656KiB/s (8863kB/s)(30.2MiB/3575msec) 00:16:05.036 slat (usec): min=9, max=15168, avg=32.73, stdev=279.54 00:16:05.036 clat (nsec): min=1463, max=3087.4k, avg=426572.88, stdev=192215.31 00:16:05.036 lat (usec): min=121, max=15413, avg=459.31, stdev=338.96 00:16:05.036 clat percentiles (usec): 00:16:05.036 | 1.00th=[ 117], 5.00th=[ 125], 10.00th=[ 133], 20.00th=[ 196], 00:16:05.036 | 30.00th=[ 265], 40.00th=[ 494], 50.00th=[ 519], 60.00th=[ 537], 00:16:05.036 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 635], 00:16:05.036 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 1123], 99.95th=[ 1680], 00:16:05.036 | 99.99th=[ 3097] 00:16:05.036 bw ( KiB/s): min= 6586, max= 7296, per=17.18%, avg=6837.67, stdev=263.48, samples=6 00:16:05.036 iops : min= 1646, max= 1824, avg=1709.33, stdev=65.97, samples=6 00:16:05.036 lat (usec) : 2=0.04%, 20=0.01%, 50=0.01%, 100=0.03%, 250=28.18% 00:16:05.036 lat (usec) : 500=13.88%, 750=57.63%, 1000=0.10% 00:16:05.036 lat (msec) : 2=0.08%, 4=0.03% 00:16:05.036 cpu : usr=1.04%, sys=4.90%, ctx=7785, majf=0, minf=1 00:16:05.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.037 issued rwts: total=7737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.037 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76181: Thu Jul 11 07:08:49 2024 00:16:05.037 read: IOPS=3377, BW=13.2MiB/s (13.8MB/s)(41.7MiB/3162msec) 00:16:05.037 slat (usec): min=10, max=7747, avg=18.16, stdev=103.92 
00:16:05.037 clat (usec): min=120, max=3748, avg=276.11, stdev=79.55 00:16:05.037 lat (usec): min=154, max=7992, avg=294.27, stdev=130.55 00:16:05.037 clat percentiles (usec): 00:16:05.037 | 1.00th=[ 165], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 212], 00:16:05.037 | 30.00th=[ 223], 40.00th=[ 235], 50.00th=[ 260], 60.00th=[ 310], 00:16:05.037 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 375], 00:16:05.037 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 594], 99.95th=[ 857], 00:16:05.037 | 99.99th=[ 2769] 00:16:05.037 bw ( KiB/s): min=12782, max=13856, per=33.64%, avg=13389.00, stdev=421.19, samples=6 00:16:05.037 iops : min= 3195, max= 3464, avg=3347.17, stdev=105.44, samples=6 00:16:05.037 lat (usec) : 250=47.55%, 500=52.28%, 750=0.08%, 1000=0.04% 00:16:05.037 lat (msec) : 2=0.02%, 4=0.02% 00:16:05.037 cpu : usr=1.17%, sys=4.43%, ctx=10687, majf=0, minf=1 00:16:05.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.037 issued rwts: total=10680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.037 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76182: Thu Jul 11 07:08:49 2024 00:16:05.037 read: IOPS=2672, BW=10.4MiB/s (10.9MB/s)(30.6MiB/2934msec) 00:16:05.037 slat (usec): min=16, max=124, avg=28.13, stdev= 8.85 00:16:05.037 clat (usec): min=170, max=2552, avg=343.22, stdev=60.56 00:16:05.037 lat (usec): min=188, max=2574, avg=371.35, stdev=62.36 00:16:05.037 clat percentiles (usec): 00:16:05.037 | 1.00th=[ 243], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 310], 00:16:05.037 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:16:05.037 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 408], 95.00th=[ 437], 00:16:05.037 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 734], 99.95th=[ 783], 00:16:05.037 | 99.99th=[ 2540] 00:16:05.037 bw ( KiB/s): min= 9696, max=11176, per=26.52%, avg=10556.80, stdev=768.70, samples=5 00:16:05.037 iops : min= 2424, max= 2794, avg=2639.20, stdev=192.17, samples=5 00:16:05.037 lat (usec) : 250=1.14%, 500=98.28%, 750=0.50%, 1000=0.05% 00:16:05.037 lat (msec) : 4=0.03% 00:16:05.037 cpu : usr=1.36%, sys=6.17%, ctx=7840, majf=0, minf=1 00:16:05.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.037 issued rwts: total=7840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.037 00:16:05.037 Run status group 0 (all jobs): 00:16:05.037 READ: bw=38.9MiB/s (40.8MB/s), 8656KiB/s-13.2MiB/s (8863kB/s-13.8MB/s), io=139MiB (146MB), run=2934-3575msec 00:16:05.037 00:16:05.037 Disk stats (read/write): 00:16:05.037 nvme0n1: ios=8402/0, merge=0/0, ticks=2987/0, in_queue=2987, util=95.35% 00:16:05.037 nvme0n2: ios=6462/0, merge=0/0, ticks=3149/0, in_queue=3149, util=95.21% 00:16:05.037 nvme0n3: ios=10504/0, merge=0/0, ticks=3003/0, in_queue=3003, util=96.37% 00:16:05.037 nvme0n4: ios=7648/0, merge=0/0, ticks=2721/0, in_queue=2721, util=96.69% 00:16:05.037 07:08:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.037 07:08:49 
-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:05.295 07:08:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.295 07:08:49 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:05.554 07:08:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.554 07:08:49 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:05.813 07:08:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.813 07:08:49 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:06.072 07:08:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.072 07:08:49 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:06.329 07:08:50 -- target/fio.sh@69 -- # fio_status=0 00:16:06.329 07:08:50 -- target/fio.sh@70 -- # wait 76139 00:16:06.329 07:08:50 -- target/fio.sh@70 -- # fio_status=4 00:16:06.330 07:08:50 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.330 07:08:50 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.330 07:08:50 -- common/autotest_common.sh@1198 -- # local i=0 00:16:06.330 07:08:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:06.330 07:08:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.330 07:08:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:06.330 07:08:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.330 nvmf hotplug test: fio failed as expected 00:16:06.330 07:08:50 -- common/autotest_common.sh@1210 -- # return 0 00:16:06.330 07:08:50 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:06.330 07:08:50 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:06.330 07:08:50 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.587 07:08:50 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:06.587 07:08:50 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:06.587 07:08:50 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:06.587 07:08:50 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:06.587 07:08:50 -- target/fio.sh@91 -- # nvmftestfini 00:16:06.587 07:08:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:06.587 07:08:50 -- nvmf/common.sh@116 -- # sync 00:16:06.587 07:08:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:06.587 07:08:50 -- nvmf/common.sh@119 -- # set +e 00:16:06.587 07:08:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:06.587 07:08:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:06.587 rmmod nvme_tcp 00:16:06.587 rmmod nvme_fabrics 00:16:06.587 rmmod nvme_keyring 00:16:06.587 07:08:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:06.587 07:08:50 -- nvmf/common.sh@123 -- # set -e 00:16:06.587 07:08:50 -- nvmf/common.sh@124 -- # return 0 00:16:06.587 07:08:50 -- nvmf/common.sh@477 -- # '[' -n 75646 ']' 00:16:06.587 07:08:50 -- nvmf/common.sh@478 -- # killprocess 75646 00:16:06.587 07:08:50 -- 
common/autotest_common.sh@926 -- # '[' -z 75646 ']' 00:16:06.587 07:08:50 -- common/autotest_common.sh@930 -- # kill -0 75646 00:16:06.587 07:08:50 -- common/autotest_common.sh@931 -- # uname 00:16:06.587 07:08:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.587 07:08:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75646 00:16:06.587 killing process with pid 75646 00:16:06.587 07:08:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:06.587 07:08:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:06.587 07:08:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75646' 00:16:06.587 07:08:50 -- common/autotest_common.sh@945 -- # kill 75646 00:16:06.587 07:08:50 -- common/autotest_common.sh@950 -- # wait 75646 00:16:06.844 07:08:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:06.844 07:08:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:06.844 07:08:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:06.844 07:08:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.844 07:08:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:06.844 07:08:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.844 07:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.844 07:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.102 07:08:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:07.102 00:16:07.102 real 0m18.935s 00:16:07.102 user 1m13.342s 00:16:07.102 sys 0m7.142s 00:16:07.102 07:08:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.102 07:08:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.102 ************************************ 00:16:07.102 END TEST nvmf_fio_target 00:16:07.102 ************************************ 00:16:07.102 07:08:50 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:07.102 07:08:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:07.102 07:08:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.102 07:08:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.102 ************************************ 00:16:07.102 START TEST nvmf_bdevio 00:16:07.102 ************************************ 00:16:07.102 07:08:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:07.102 * Looking for test storage... 
00:16:07.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:07.102 07:08:51 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.102 07:08:51 -- nvmf/common.sh@7 -- # uname -s 00:16:07.102 07:08:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.102 07:08:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.102 07:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.102 07:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.102 07:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.102 07:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.102 07:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.102 07:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.102 07:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.102 07:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.102 07:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:16:07.102 07:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:16:07.102 07:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.102 07:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.102 07:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.102 07:08:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.102 07:08:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.102 07:08:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.102 07:08:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.102 07:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.102 07:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.102 07:08:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.102 07:08:51 -- 
paths/export.sh@5 -- # export PATH 00:16:07.102 07:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.102 07:08:51 -- nvmf/common.sh@46 -- # : 0 00:16:07.102 07:08:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:07.102 07:08:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:07.102 07:08:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:07.102 07:08:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.102 07:08:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.102 07:08:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:07.102 07:08:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:07.102 07:08:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:07.102 07:08:51 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.102 07:08:51 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.102 07:08:51 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:07.102 07:08:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:07.102 07:08:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.102 07:08:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:07.102 07:08:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:07.102 07:08:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:07.102 07:08:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.102 07:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.102 07:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.102 07:08:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:07.102 07:08:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:07.102 07:08:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:07.102 07:08:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:07.102 07:08:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:07.102 07:08:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:07.102 07:08:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.102 07:08:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.102 07:08:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:07.102 07:08:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:07.102 07:08:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.102 07:08:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.102 07:08:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.102 07:08:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.102 07:08:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.102 07:08:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.102 07:08:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.102 07:08:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.102 07:08:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:07.102 
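For orientation: the nvmf_veth_init trace that follows tears down and rebuilds the veth/bridge test network, after which nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace and the bdevio target is assembled through rpc_cmd. Condensed out of the xtrace noise into plain scripts/rpc.py calls (an illustrative sketch only; every argument is copied from the trace further down), the target setup amounts to roughly:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport for the target
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB malloc-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # subsystem, any host allowed
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as namespace 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listen on the namespaced veth address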
07:08:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:07.102 Cannot find device "nvmf_tgt_br" 00:16:07.102 07:08:51 -- nvmf/common.sh@154 -- # true 00:16:07.102 07:08:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.102 Cannot find device "nvmf_tgt_br2" 00:16:07.102 07:08:51 -- nvmf/common.sh@155 -- # true 00:16:07.102 07:08:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:07.102 07:08:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:07.102 Cannot find device "nvmf_tgt_br" 00:16:07.102 07:08:51 -- nvmf/common.sh@157 -- # true 00:16:07.102 07:08:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:07.102 Cannot find device "nvmf_tgt_br2" 00:16:07.102 07:08:51 -- nvmf/common.sh@158 -- # true 00:16:07.102 07:08:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:07.360 07:08:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:07.360 07:08:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.360 07:08:51 -- nvmf/common.sh@161 -- # true 00:16:07.360 07:08:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.360 07:08:51 -- nvmf/common.sh@162 -- # true 00:16:07.360 07:08:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:07.360 07:08:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:07.360 07:08:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:07.360 07:08:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:07.360 07:08:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.360 07:08:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:07.360 07:08:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:07.360 07:08:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:07.360 07:08:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:07.360 07:08:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:07.360 07:08:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:07.360 07:08:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:07.360 07:08:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:07.360 07:08:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.360 07:08:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.360 07:08:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:07.360 07:08:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:07.360 07:08:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:07.360 07:08:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.360 07:08:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.360 07:08:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.360 07:08:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.360 07:08:51 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.360 07:08:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:07.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:07.360 00:16:07.360 --- 10.0.0.2 ping statistics --- 00:16:07.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.360 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:07.360 07:08:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:07.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:07.360 00:16:07.360 --- 10.0.0.3 ping statistics --- 00:16:07.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.360 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:07.360 07:08:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:07.617 00:16:07.617 --- 10.0.0.1 ping statistics --- 00:16:07.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.617 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:07.617 07:08:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.617 07:08:51 -- nvmf/common.sh@421 -- # return 0 00:16:07.617 07:08:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:07.617 07:08:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.617 07:08:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:07.617 07:08:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:07.617 07:08:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.617 07:08:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:07.617 07:08:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:07.617 07:08:51 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:07.617 07:08:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:07.617 07:08:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:07.617 07:08:51 -- common/autotest_common.sh@10 -- # set +x 00:16:07.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.617 07:08:51 -- nvmf/common.sh@469 -- # nvmfpid=76502 00:16:07.617 07:08:51 -- nvmf/common.sh@470 -- # waitforlisten 76502 00:16:07.617 07:08:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:07.617 07:08:51 -- common/autotest_common.sh@819 -- # '[' -z 76502 ']' 00:16:07.617 07:08:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.617 07:08:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:07.617 07:08:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.617 07:08:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:07.617 07:08:51 -- common/autotest_common.sh@10 -- # set +x 00:16:07.617 [2024-07-11 07:08:51.496103] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:07.617 [2024-07-11 07:08:51.496312] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.617 [2024-07-11 07:08:51.628562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.875 [2024-07-11 07:08:51.713371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.876 [2024-07-11 07:08:51.714017] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.876 [2024-07-11 07:08:51.714236] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.876 [2024-07-11 07:08:51.714486] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.876 [2024-07-11 07:08:51.715056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:07.876 [2024-07-11 07:08:51.715297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:07.876 [2024-07-11 07:08:51.715423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:07.876 [2024-07-11 07:08:51.715430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.441 07:08:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:08.441 07:08:52 -- common/autotest_common.sh@852 -- # return 0 00:16:08.441 07:08:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:08.441 07:08:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:08.441 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:08.441 07:08:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.441 07:08:52 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.441 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.441 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:08.441 [2024-07-11 07:08:52.465707] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.441 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.441 07:08:52 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:08.441 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.441 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:08.699 Malloc0 00:16:08.699 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.699 07:08:52 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:08.699 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.699 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:08.699 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.699 07:08:52 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.699 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.699 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:08.699 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.699 07:08:52 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.699 07:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.699 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:08.699 
[2024-07-11 07:08:52.551811] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.699 07:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.699 07:08:52 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:08.699 07:08:52 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:08.699 07:08:52 -- nvmf/common.sh@520 -- # config=() 00:16:08.699 07:08:52 -- nvmf/common.sh@520 -- # local subsystem config 00:16:08.699 07:08:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:08.699 07:08:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:08.699 { 00:16:08.699 "params": { 00:16:08.699 "name": "Nvme$subsystem", 00:16:08.699 "trtype": "$TEST_TRANSPORT", 00:16:08.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:08.699 "adrfam": "ipv4", 00:16:08.699 "trsvcid": "$NVMF_PORT", 00:16:08.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:08.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:08.699 "hdgst": ${hdgst:-false}, 00:16:08.699 "ddgst": ${ddgst:-false} 00:16:08.699 }, 00:16:08.699 "method": "bdev_nvme_attach_controller" 00:16:08.699 } 00:16:08.699 EOF 00:16:08.699 )") 00:16:08.699 07:08:52 -- nvmf/common.sh@542 -- # cat 00:16:08.699 07:08:52 -- nvmf/common.sh@544 -- # jq . 00:16:08.699 07:08:52 -- nvmf/common.sh@545 -- # IFS=, 00:16:08.699 07:08:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:08.699 "params": { 00:16:08.699 "name": "Nvme1", 00:16:08.699 "trtype": "tcp", 00:16:08.699 "traddr": "10.0.0.2", 00:16:08.699 "adrfam": "ipv4", 00:16:08.699 "trsvcid": "4420", 00:16:08.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:08.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:08.699 "hdgst": false, 00:16:08.699 "ddgst": false 00:16:08.699 }, 00:16:08.699 "method": "bdev_nvme_attach_controller" 00:16:08.699 }' 00:16:08.699 [2024-07-11 07:08:52.616062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:08.699 [2024-07-11 07:08:52.616146] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76556 ] 00:16:08.699 [2024-07-11 07:08:52.756843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:08.957 [2024-07-11 07:08:52.868279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.957 [2024-07-11 07:08:52.868424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.957 [2024-07-11 07:08:52.868433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.215 [2024-07-11 07:08:53.077906] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:09.215 [2024-07-11 07:08:53.077959] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:09.215 I/O targets: 00:16:09.215 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:09.215 00:16:09.215 00:16:09.215 CUnit - A unit testing framework for C - Version 2.1-3 00:16:09.215 http://cunit.sourceforge.net/ 00:16:09.215 00:16:09.215 00:16:09.215 Suite: bdevio tests on: Nvme1n1 00:16:09.215 Test: blockdev write read block ...passed 00:16:09.215 Test: blockdev write zeroes read block ...passed 00:16:09.215 Test: blockdev write zeroes read no split ...passed 00:16:09.215 Test: blockdev write zeroes read split ...passed 00:16:09.215 Test: blockdev write zeroes read split partial ...passed 00:16:09.215 Test: blockdev reset ...[2024-07-11 07:08:53.198362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:09.215 [2024-07-11 07:08:53.198756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1120810 (9): Bad file descriptor 00:16:09.215 [2024-07-11 07:08:53.218519] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:09.215 passed 00:16:09.215 Test: blockdev write read 8 blocks ...passed 00:16:09.215 Test: blockdev write read size > 128k ...passed 00:16:09.215 Test: blockdev write read invalid size ...passed 00:16:09.215 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:09.215 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:09.215 Test: blockdev write read max offset ...passed 00:16:09.474 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:09.474 Test: blockdev writev readv 8 blocks ...passed 00:16:09.474 Test: blockdev writev readv 30 x 1block ...passed 00:16:09.474 Test: blockdev writev readv block ...passed 00:16:09.474 Test: blockdev writev readv size > 128k ...passed 00:16:09.474 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:09.474 Test: blockdev comparev and writev ...[2024-07-11 07:08:53.395519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.395700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.395757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.395778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.396193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.396209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.396224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.396248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.396581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.396598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.396629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.396639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.397153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.397185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.397203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:09.474 [2024-07-11 07:08:53.397213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:09.474 passed 00:16:09.474 Test: blockdev nvme passthru rw ...passed 00:16:09.474 Test: blockdev nvme passthru vendor specific ...[2024-07-11 07:08:53.478772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:09.474 [2024-07-11 07:08:53.478800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.479241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:09.474 [2024-07-11 07:08:53.479271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.479441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:09.474 [2024-07-11 07:08:53.479455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:09.474 [2024-07-11 07:08:53.479626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:09.474 [2024-07-11 07:08:53.479642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:09.474 passed 00:16:09.474 Test: blockdev nvme admin passthru ...passed 00:16:09.732 Test: blockdev copy ...passed 00:16:09.732 00:16:09.732 Run Summary: Type Total Ran Passed Failed Inactive 00:16:09.732 suites 1 1 n/a 0 0 00:16:09.732 tests 23 23 23 0 0 00:16:09.732 asserts 152 152 152 0 n/a 00:16:09.732 00:16:09.732 Elapsed time = 0.923 seconds 00:16:09.989 07:08:53 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.989 07:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.989 07:08:53 -- common/autotest_common.sh@10 -- # set +x 00:16:09.989 07:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.989 07:08:53 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:09.989 07:08:53 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:09.989 07:08:53 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:09.989 07:08:53 -- nvmf/common.sh@116 -- # sync 00:16:09.989 07:08:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:09.989 07:08:53 -- nvmf/common.sh@119 -- # set +e 00:16:09.989 07:08:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:09.989 07:08:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:09.989 rmmod nvme_tcp 00:16:09.989 rmmod nvme_fabrics 00:16:09.989 rmmod nvme_keyring 00:16:09.989 07:08:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:09.989 07:08:53 -- nvmf/common.sh@123 -- # set -e 00:16:09.989 07:08:53 -- nvmf/common.sh@124 -- # return 0 00:16:09.989 07:08:53 -- nvmf/common.sh@477 -- # '[' -n 76502 ']' 00:16:09.989 07:08:53 -- nvmf/common.sh@478 -- # killprocess 76502 00:16:09.989 07:08:53 -- common/autotest_common.sh@926 -- # '[' -z 76502 ']' 00:16:09.989 07:08:53 -- common/autotest_common.sh@930 -- # kill -0 76502 00:16:09.989 07:08:53 -- common/autotest_common.sh@931 -- # uname 00:16:09.989 07:08:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.989 07:08:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76502 00:16:09.989 killing process with pid 76502 00:16:09.989 07:08:53 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:09.989 07:08:53 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:09.989 07:08:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76502' 00:16:09.989 07:08:53 -- common/autotest_common.sh@945 -- # kill 76502 00:16:09.989 07:08:53 -- common/autotest_common.sh@950 -- # wait 76502 00:16:10.247 07:08:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:10.247 07:08:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:10.247 07:08:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:10.247 07:08:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.247 07:08:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:10.247 07:08:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.247 07:08:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.247 07:08:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.505 07:08:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:10.505 00:16:10.506 real 0m3.359s 00:16:10.506 user 0m12.284s 00:16:10.506 sys 0m0.852s 00:16:10.506 ************************************ 00:16:10.506 END TEST nvmf_bdevio 00:16:10.506 ************************************ 00:16:10.506 07:08:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.506 07:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:10.506 07:08:54 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:10.506 07:08:54 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:10.506 07:08:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:10.506 07:08:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.506 07:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:10.506 ************************************ 00:16:10.506 START TEST nvmf_bdevio_no_huge 00:16:10.506 ************************************ 00:16:10.506 07:08:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:10.506 * Looking for test storage... 
00:16:10.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:10.506 07:08:54 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.506 07:08:54 -- nvmf/common.sh@7 -- # uname -s 00:16:10.506 07:08:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.506 07:08:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.506 07:08:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.506 07:08:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.506 07:08:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.506 07:08:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.506 07:08:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.506 07:08:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.506 07:08:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.506 07:08:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.506 07:08:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:16:10.506 07:08:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:16:10.506 07:08:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.506 07:08:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.506 07:08:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.506 07:08:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.506 07:08:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.506 07:08:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.506 07:08:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.506 07:08:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.506 07:08:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.506 07:08:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.506 07:08:54 -- 
paths/export.sh@5 -- # export PATH 00:16:10.506 07:08:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.506 07:08:54 -- nvmf/common.sh@46 -- # : 0 00:16:10.506 07:08:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:10.506 07:08:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:10.506 07:08:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:10.506 07:08:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.506 07:08:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.506 07:08:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:10.506 07:08:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:10.506 07:08:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:10.506 07:08:54 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.506 07:08:54 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.506 07:08:54 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:10.506 07:08:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:10.506 07:08:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.506 07:08:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:10.506 07:08:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:10.506 07:08:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:10.506 07:08:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.506 07:08:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.506 07:08:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.506 07:08:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:10.506 07:08:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:10.506 07:08:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:10.506 07:08:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:10.506 07:08:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:10.506 07:08:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:10.506 07:08:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.506 07:08:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.506 07:08:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.506 07:08:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:10.506 07:08:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.506 07:08:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.506 07:08:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.506 07:08:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.506 07:08:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.506 07:08:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.506 07:08:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.506 07:08:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.506 07:08:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:10.506 
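What separates this nvmf_bdevio_no_huge pass from the nvmf_bdevio run above is only how the two processes are launched: both the target and the bdevio client run without hugepages and with a 1024 MB memory cap, as the commands traced further down show. Stripped of the surrounding xtrace, the two launches reduce to (a sketch; paths and flags copied from this job's trace):
  # target inside the test namespace: no hugepages, 1024 MB of DPDK memory, core mask 0x78 (cores 3-6)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  # bdevio client fed the generated NVMe-oF attach config on fd 62, same no-huge settings (cores 0-2)
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024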
07:08:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:10.506 Cannot find device "nvmf_tgt_br" 00:16:10.506 07:08:54 -- nvmf/common.sh@154 -- # true 00:16:10.506 07:08:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:10.506 Cannot find device "nvmf_tgt_br2" 00:16:10.506 07:08:54 -- nvmf/common.sh@155 -- # true 00:16:10.506 07:08:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:10.506 07:08:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:10.765 Cannot find device "nvmf_tgt_br" 00:16:10.765 07:08:54 -- nvmf/common.sh@157 -- # true 00:16:10.765 07:08:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:10.765 Cannot find device "nvmf_tgt_br2" 00:16:10.765 07:08:54 -- nvmf/common.sh@158 -- # true 00:16:10.765 07:08:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:10.765 07:08:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:10.765 07:08:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:10.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.765 07:08:54 -- nvmf/common.sh@161 -- # true 00:16:10.765 07:08:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:10.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.765 07:08:54 -- nvmf/common.sh@162 -- # true 00:16:10.765 07:08:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:10.765 07:08:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:10.765 07:08:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:10.765 07:08:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:10.766 07:08:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:10.766 07:08:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:10.766 07:08:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:10.766 07:08:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:10.766 07:08:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:10.766 07:08:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:10.766 07:08:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:10.766 07:08:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:10.766 07:08:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:10.766 07:08:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:10.766 07:08:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:10.766 07:08:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:10.766 07:08:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:10.766 07:08:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:11.025 07:08:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.025 07:08:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.025 07:08:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.025 07:08:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.025 07:08:54 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.025 07:08:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:11.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:16:11.025 00:16:11.025 --- 10.0.0.2 ping statistics --- 00:16:11.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.025 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:11.025 07:08:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:11.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:11.025 00:16:11.025 --- 10.0.0.3 ping statistics --- 00:16:11.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.025 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:11.025 07:08:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:11.025 00:16:11.025 --- 10.0.0.1 ping statistics --- 00:16:11.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.025 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:11.025 07:08:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.025 07:08:54 -- nvmf/common.sh@421 -- # return 0 00:16:11.025 07:08:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.025 07:08:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.025 07:08:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.025 07:08:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.025 07:08:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.025 07:08:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.025 07:08:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.025 07:08:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:11.025 07:08:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:11.025 07:08:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:11.025 07:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.025 07:08:54 -- nvmf/common.sh@469 -- # nvmfpid=76738 00:16:11.025 07:08:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:11.025 07:08:54 -- nvmf/common.sh@470 -- # waitforlisten 76738 00:16:11.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.025 07:08:54 -- common/autotest_common.sh@819 -- # '[' -z 76738 ']' 00:16:11.025 07:08:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.025 07:08:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.025 07:08:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.025 07:08:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.025 07:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.025 [2024-07-11 07:08:54.987007] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:11.025 [2024-07-11 07:08:54.987098] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:11.284 [2024-07-11 07:08:55.136759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.284 [2024-07-11 07:08:55.246378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.284 [2024-07-11 07:08:55.246900] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.284 [2024-07-11 07:08:55.246950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.284 [2024-07-11 07:08:55.247072] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.284 [2024-07-11 07:08:55.247734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:11.284 [2024-07-11 07:08:55.247908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:11.284 [2024-07-11 07:08:55.248057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:11.284 [2024-07-11 07:08:55.248072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.221 07:08:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.221 07:08:55 -- common/autotest_common.sh@852 -- # return 0 00:16:12.221 07:08:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.221 07:08:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:12.221 07:08:55 -- common/autotest_common.sh@10 -- # set +x 00:16:12.221 07:08:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.221 07:08:56 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.221 07:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.221 07:08:56 -- common/autotest_common.sh@10 -- # set +x 00:16:12.221 [2024-07-11 07:08:56.017662] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.221 07:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.221 07:08:56 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:12.221 07:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.221 07:08:56 -- common/autotest_common.sh@10 -- # set +x 00:16:12.221 Malloc0 00:16:12.221 07:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.221 07:08:56 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:12.221 07:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.221 07:08:56 -- common/autotest_common.sh@10 -- # set +x 00:16:12.221 07:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.221 07:08:56 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.221 07:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.221 07:08:56 -- common/autotest_common.sh@10 -- # set +x 00:16:12.221 07:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.221 07:08:56 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.221 07:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.221 07:08:56 -- common/autotest_common.sh@10 -- # set +x 00:16:12.221 
[2024-07-11 07:08:56.057898] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.221 07:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.221 07:08:56 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:12.221 07:08:56 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:12.221 07:08:56 -- nvmf/common.sh@520 -- # config=() 00:16:12.221 07:08:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:12.221 07:08:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:12.221 07:08:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:12.221 { 00:16:12.221 "params": { 00:16:12.221 "name": "Nvme$subsystem", 00:16:12.221 "trtype": "$TEST_TRANSPORT", 00:16:12.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:12.221 "adrfam": "ipv4", 00:16:12.221 "trsvcid": "$NVMF_PORT", 00:16:12.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:12.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:12.221 "hdgst": ${hdgst:-false}, 00:16:12.221 "ddgst": ${ddgst:-false} 00:16:12.221 }, 00:16:12.221 "method": "bdev_nvme_attach_controller" 00:16:12.221 } 00:16:12.221 EOF 00:16:12.221 )") 00:16:12.221 07:08:56 -- nvmf/common.sh@542 -- # cat 00:16:12.221 07:08:56 -- nvmf/common.sh@544 -- # jq . 00:16:12.221 07:08:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:12.221 07:08:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:12.221 "params": { 00:16:12.221 "name": "Nvme1", 00:16:12.221 "trtype": "tcp", 00:16:12.221 "traddr": "10.0.0.2", 00:16:12.221 "adrfam": "ipv4", 00:16:12.221 "trsvcid": "4420", 00:16:12.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:12.221 "hdgst": false, 00:16:12.221 "ddgst": false 00:16:12.221 }, 00:16:12.221 "method": "bdev_nvme_attach_controller" 00:16:12.221 }' 00:16:12.221 [2024-07-11 07:08:56.122633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:12.221 [2024-07-11 07:08:56.122731] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76799 ] 00:16:12.221 [2024-07-11 07:08:56.273871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:12.481 [2024-07-11 07:08:56.430235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.481 [2024-07-11 07:08:56.430376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.481 [2024-07-11 07:08:56.430389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.740 [2024-07-11 07:08:56.620278] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
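
For reference, the target-side configuration this bdevio run exercises comes entirely from the rpc_cmd calls traced above (rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock). A condensed sketch, assuming the nvmf_tgt launched earlier in this log is still running and reachable on that socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport, options as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0                                         # RAM-backed bdev: 64 MiB, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
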
00:16:12.740 [2024-07-11 07:08:56.620315] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:12.740 I/O targets: 00:16:12.740 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:12.740 00:16:12.740 00:16:12.740 CUnit - A unit testing framework for C - Version 2.1-3 00:16:12.740 http://cunit.sourceforge.net/ 00:16:12.740 00:16:12.740 00:16:12.740 Suite: bdevio tests on: Nvme1n1 00:16:12.740 Test: blockdev write read block ...passed 00:16:12.740 Test: blockdev write zeroes read block ...passed 00:16:12.740 Test: blockdev write zeroes read no split ...passed 00:16:12.740 Test: blockdev write zeroes read split ...passed 00:16:12.740 Test: blockdev write zeroes read split partial ...passed 00:16:12.740 Test: blockdev reset ...[2024-07-11 07:08:56.755383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.740 [2024-07-11 07:08:56.755508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66cba0 (9): Bad file descriptor 00:16:12.740 [2024-07-11 07:08:56.769177] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:12.740 passed 00:16:12.740 Test: blockdev write read 8 blocks ...passed 00:16:12.740 Test: blockdev write read size > 128k ...passed 00:16:12.740 Test: blockdev write read invalid size ...passed 00:16:13.000 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:13.000 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:13.000 Test: blockdev write read max offset ...passed 00:16:13.000 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:13.000 Test: blockdev writev readv 8 blocks ...passed 00:16:13.000 Test: blockdev writev readv 30 x 1block ...passed 00:16:13.000 Test: blockdev writev readv block ...passed 00:16:13.000 Test: blockdev writev readv size > 128k ...passed 00:16:13.000 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:13.000 Test: blockdev comparev and writev ...[2024-07-11 07:08:56.944691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.944724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:56.944742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.944753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:56.945416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.945463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:56.945505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.945525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:56.946139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.946172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:56.946199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.946209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:56.946718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.946744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:56.946774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.000 [2024-07-11 07:08:56.946786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:13.000 passed 00:16:13.000 Test: blockdev nvme passthru rw ...passed 00:16:13.000 Test: blockdev nvme passthru vendor specific ...[2024-07-11 07:08:57.028961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.000 [2024-07-11 07:08:57.028991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:57.029132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.000 [2024-07-11 07:08:57.029155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:57.029278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.000 [2024-07-11 07:08:57.029300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:13.000 [2024-07-11 07:08:57.029408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:13.000 [2024-07-11 07:08:57.029430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:13.000 passed 00:16:13.000 Test: blockdev nvme admin passthru ...passed 00:16:13.259 Test: blockdev copy ...passed 00:16:13.259 00:16:13.259 Run Summary: Type Total Ran Passed Failed Inactive 00:16:13.259 suites 1 1 n/a 0 0 00:16:13.259 tests 23 23 23 0 0 00:16:13.259 asserts 152 152 152 0 n/a 00:16:13.259 00:16:13.259 Elapsed time = 0.942 seconds 00:16:13.518 07:08:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.518 07:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.518 07:08:57 -- common/autotest_common.sh@10 -- # set +x 00:16:13.776 07:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.776 07:08:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:13.776 07:08:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:13.776 07:08:57 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:13.776 07:08:57 -- nvmf/common.sh@116 -- # sync 00:16:13.776 07:08:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.776 07:08:57 -- nvmf/common.sh@119 -- # set +e 00:16:13.776 07:08:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.776 07:08:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.776 rmmod nvme_tcp 00:16:13.776 rmmod nvme_fabrics 00:16:13.776 rmmod nvme_keyring 00:16:13.776 07:08:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.776 07:08:57 -- nvmf/common.sh@123 -- # set -e 00:16:13.776 07:08:57 -- nvmf/common.sh@124 -- # return 0 00:16:13.777 07:08:57 -- nvmf/common.sh@477 -- # '[' -n 76738 ']' 00:16:13.777 07:08:57 -- nvmf/common.sh@478 -- # killprocess 76738 00:16:13.777 07:08:57 -- common/autotest_common.sh@926 -- # '[' -z 76738 ']' 00:16:13.777 07:08:57 -- common/autotest_common.sh@930 -- # kill -0 76738 00:16:13.777 07:08:57 -- common/autotest_common.sh@931 -- # uname 00:16:13.777 07:08:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:13.777 07:08:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76738 00:16:13.777 07:08:57 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:13.777 07:08:57 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:13.777 killing process with pid 76738 00:16:13.777 07:08:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76738' 00:16:13.777 07:08:57 -- common/autotest_common.sh@945 -- # kill 76738 00:16:13.777 07:08:57 -- common/autotest_common.sh@950 -- # wait 76738 00:16:14.345 07:08:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:14.345 07:08:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:14.345 07:08:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:14.345 07:08:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.345 07:08:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.345 07:08:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.345 07:08:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:14.345 ************************************ 00:16:14.345 END TEST nvmf_bdevio_no_huge 00:16:14.345 ************************************ 00:16:14.345 00:16:14.345 real 0m3.851s 00:16:14.345 user 0m13.714s 00:16:14.345 sys 0m1.377s 00:16:14.345 07:08:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.345 07:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.345 07:08:58 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:14.345 07:08:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:14.345 07:08:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:14.345 07:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.345 ************************************ 00:16:14.345 START TEST nvmf_tls 00:16:14.345 ************************************ 00:16:14.345 07:08:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:14.345 * Looking for test storage... 
00:16:14.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:14.345 07:08:58 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.345 07:08:58 -- nvmf/common.sh@7 -- # uname -s 00:16:14.345 07:08:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.345 07:08:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.345 07:08:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.345 07:08:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.345 07:08:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.345 07:08:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.345 07:08:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.345 07:08:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.345 07:08:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.345 07:08:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:16:14.345 07:08:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:16:14.345 07:08:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.345 07:08:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.345 07:08:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.345 07:08:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.345 07:08:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.345 07:08:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.345 07:08:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.345 07:08:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.345 07:08:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.345 07:08:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.345 07:08:58 -- paths/export.sh@5 
-- # export PATH 00:16:14.345 07:08:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.345 07:08:58 -- nvmf/common.sh@46 -- # : 0 00:16:14.345 07:08:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.345 07:08:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.345 07:08:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.345 07:08:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.345 07:08:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.345 07:08:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:14.345 07:08:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.345 07:08:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.345 07:08:58 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.345 07:08:58 -- target/tls.sh@71 -- # nvmftestinit 00:16:14.345 07:08:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.345 07:08:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.345 07:08:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.345 07:08:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.345 07:08:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.345 07:08:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.345 07:08:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.345 07:08:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.345 07:08:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:14.345 07:08:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:14.345 07:08:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.345 07:08:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.345 07:08:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.345 07:08:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:14.345 07:08:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.345 07:08:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.345 07:08:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.345 07:08:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.345 07:08:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.345 07:08:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.345 07:08:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.345 07:08:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.345 07:08:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:14.606 07:08:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:14.606 Cannot find device "nvmf_tgt_br" 00:16:14.606 07:08:58 -- nvmf/common.sh@154 -- # true 00:16:14.606 07:08:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.606 Cannot find device "nvmf_tgt_br2" 00:16:14.606 07:08:58 -- nvmf/common.sh@155 -- # true 00:16:14.606 07:08:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:14.606 07:08:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:14.606 Cannot find device "nvmf_tgt_br" 00:16:14.606 07:08:58 -- nvmf/common.sh@157 -- # true 00:16:14.606 07:08:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:14.606 Cannot find device "nvmf_tgt_br2" 00:16:14.606 07:08:58 -- nvmf/common.sh@158 -- # true 00:16:14.606 07:08:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:14.606 07:08:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:14.606 07:08:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.606 07:08:58 -- nvmf/common.sh@161 -- # true 00:16:14.606 07:08:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.606 07:08:58 -- nvmf/common.sh@162 -- # true 00:16:14.606 07:08:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.606 07:08:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.606 07:08:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.606 07:08:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.606 07:08:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.606 07:08:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.606 07:08:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.606 07:08:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.606 07:08:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.606 07:08:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:14.606 07:08:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:14.606 07:08:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:14.606 07:08:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:14.606 07:08:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.606 07:08:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.606 07:08:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.606 07:08:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:14.606 07:08:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:14.606 07:08:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.606 07:08:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.606 07:08:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.887 07:08:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.887 07:08:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:14.887 07:08:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:14.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:14.887 00:16:14.887 --- 10.0.0.2 ping statistics --- 00:16:14.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.887 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:14.887 07:08:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:14.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:14.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:14.887 00:16:14.887 --- 10.0.0.3 ping statistics --- 00:16:14.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.887 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:14.887 07:08:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:14.887 00:16:14.887 --- 10.0.0.1 ping statistics --- 00:16:14.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.887 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:14.887 07:08:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.887 07:08:58 -- nvmf/common.sh@421 -- # return 0 00:16:14.887 07:08:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.887 07:08:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.887 07:08:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.887 07:08:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.887 07:08:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.887 07:08:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.887 07:08:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.887 07:08:58 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:14.887 07:08:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.888 07:08:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:14.888 07:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.888 07:08:58 -- nvmf/common.sh@469 -- # nvmfpid=76982 00:16:14.888 07:08:58 -- nvmf/common.sh@470 -- # waitforlisten 76982 00:16:14.888 07:08:58 -- common/autotest_common.sh@819 -- # '[' -z 76982 ']' 00:16:14.888 07:08:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.888 07:08:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:14.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.888 07:08:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.888 07:08:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:14.888 07:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:14.888 07:08:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:14.888 [2024-07-11 07:08:58.784644] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
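
The nvmf_veth_init trace above amounts to a small veth/bridge topology: one initiator-side interface in the root namespace (10.0.0.1/24), two target-side interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.2/24 and 10.0.0.3/24), all joined through the nvmf_br bridge, plus iptables rules admitting port 4420 and bridge-local forwarding. A condensed sketch of the same setup, using only the names and addresses shown in the trace (run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
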
00:16:14.888 [2024-07-11 07:08:58.784726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.888 [2024-07-11 07:08:58.927779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.155 [2024-07-11 07:08:59.036338] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:15.155 [2024-07-11 07:08:59.036525] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.155 [2024-07-11 07:08:59.036544] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.155 [2024-07-11 07:08:59.036556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.155 [2024-07-11 07:08:59.036595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.722 07:08:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:15.722 07:08:59 -- common/autotest_common.sh@852 -- # return 0 00:16:15.722 07:08:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.722 07:08:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:15.722 07:08:59 -- common/autotest_common.sh@10 -- # set +x 00:16:15.722 07:08:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.722 07:08:59 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:15.722 07:08:59 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:15.981 true 00:16:15.981 07:08:59 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:15.981 07:08:59 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:16.239 07:09:00 -- target/tls.sh@82 -- # version=0 00:16:16.239 07:09:00 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:16.239 07:09:00 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:16.498 07:09:00 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:16.498 07:09:00 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:16.756 07:09:00 -- target/tls.sh@90 -- # version=13 00:16:16.756 07:09:00 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:16.756 07:09:00 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:17.015 07:09:00 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:17.015 07:09:00 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:17.273 07:09:01 -- target/tls.sh@98 -- # version=7 00:16:17.273 07:09:01 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:17.273 07:09:01 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:17.273 07:09:01 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:17.532 07:09:01 -- target/tls.sh@105 -- # ktls=false 00:16:17.532 07:09:01 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:17.532 07:09:01 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:17.790 07:09:01 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:17.790 07:09:01 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:16:18.047 07:09:02 -- target/tls.sh@113 -- # ktls=true 00:16:18.048 07:09:02 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:18.048 07:09:02 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:18.305 07:09:02 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:18.305 07:09:02 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:18.563 07:09:02 -- target/tls.sh@121 -- # ktls=false 00:16:18.563 07:09:02 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:18.563 07:09:02 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:18.563 07:09:02 -- target/tls.sh@49 -- # local key hash crc 00:16:18.563 07:09:02 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:18.563 07:09:02 -- target/tls.sh@51 -- # hash=01 00:16:18.563 07:09:02 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:18.563 07:09:02 -- target/tls.sh@52 -- # gzip -1 -c 00:16:18.563 07:09:02 -- target/tls.sh@52 -- # tail -c8 00:16:18.563 07:09:02 -- target/tls.sh@52 -- # head -c 4 00:16:18.563 07:09:02 -- target/tls.sh@52 -- # crc='p$H�' 00:16:18.563 07:09:02 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:18.563 07:09:02 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:18.563 07:09:02 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:18.563 07:09:02 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:18.563 07:09:02 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:18.564 07:09:02 -- target/tls.sh@49 -- # local key hash crc 00:16:18.564 07:09:02 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:18.564 07:09:02 -- target/tls.sh@51 -- # hash=01 00:16:18.564 07:09:02 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:18.564 07:09:02 -- target/tls.sh@52 -- # gzip -1 -c 00:16:18.564 07:09:02 -- target/tls.sh@52 -- # tail -c8 00:16:18.564 07:09:02 -- target/tls.sh@52 -- # head -c 4 00:16:18.564 07:09:02 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:18.564 07:09:02 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:18.564 07:09:02 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:18.564 07:09:02 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:18.564 07:09:02 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:18.564 07:09:02 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:18.564 07:09:02 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:18.564 07:09:02 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:18.564 07:09:02 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:18.564 07:09:02 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:18.564 07:09:02 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:18.564 07:09:02 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:18.822 07:09:02 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:19.080 07:09:03 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.080 07:09:03 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.080 07:09:03 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:19.338 [2024-07-11 07:09:03.303698] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.338 07:09:03 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:19.597 07:09:03 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:19.855 [2024-07-11 07:09:03.815787] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:19.855 [2024-07-11 07:09:03.815985] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.855 07:09:03 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:20.114 malloc0 00:16:20.114 07:09:04 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:20.371 07:09:04 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:20.629 07:09:04 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.828 Initializing NVMe Controllers 00:16:32.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:32.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:32.828 Initialization complete. Launching workers. 
00:16:32.828 ======================================================== 00:16:32.828 Latency(us) 00:16:32.828 Device Information : IOPS MiB/s Average min max 00:16:32.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10762.49 42.04 5953.20 1515.85 48257.18 00:16:32.828 ======================================================== 00:16:32.828 Total : 10762.49 42.04 5953.20 1515.85 48257.18 00:16:32.828 00:16:32.828 07:09:14 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.828 07:09:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:32.828 07:09:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:32.828 07:09:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:32.828 07:09:14 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:32.828 07:09:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.828 07:09:14 -- target/tls.sh@28 -- # bdevperf_pid=77353 00:16:32.828 07:09:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.828 07:09:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:32.828 07:09:14 -- target/tls.sh@31 -- # waitforlisten 77353 /var/tmp/bdevperf.sock 00:16:32.828 07:09:14 -- common/autotest_common.sh@819 -- # '[' -z 77353 ']' 00:16:32.828 07:09:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.828 07:09:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.828 07:09:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.828 07:09:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.828 07:09:14 -- common/autotest_common.sh@10 -- # set +x 00:16:32.828 [2024-07-11 07:09:14.762414] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:32.828 [2024-07-11 07:09:14.762526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77353 ] 00:16:32.828 [2024-07-11 07:09:14.899342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.828 [2024-07-11 07:09:14.971561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.828 07:09:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.828 07:09:15 -- common/autotest_common.sh@852 -- # return 0 00:16:32.828 07:09:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.828 [2024-07-11 07:09:15.795935] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:32.828 TLSTESTn1 00:16:32.828 07:09:15 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:32.828 Running I/O for 10 seconds... 
00:16:42.810 00:16:42.810 Latency(us) 00:16:42.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.810 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:42.810 Verification LBA range: start 0x0 length 0x2000 00:16:42.810 TLSTESTn1 : 10.01 5050.26 19.73 0.00 0.00 25314.93 2815.07 24665.37 00:16:42.810 =================================================================================================================== 00:16:42.810 Total : 5050.26 19.73 0.00 0.00 25314.93 2815.07 24665.37 00:16:42.810 0 00:16:42.810 07:09:26 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.810 07:09:26 -- target/tls.sh@45 -- # killprocess 77353 00:16:42.810 07:09:26 -- common/autotest_common.sh@926 -- # '[' -z 77353 ']' 00:16:42.810 07:09:26 -- common/autotest_common.sh@930 -- # kill -0 77353 00:16:42.810 07:09:26 -- common/autotest_common.sh@931 -- # uname 00:16:42.811 07:09:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:42.811 07:09:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77353 00:16:42.811 killing process with pid 77353 00:16:42.811 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.811 00:16:42.811 Latency(us) 00:16:42.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.811 =================================================================================================================== 00:16:42.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.811 07:09:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:42.811 07:09:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:42.811 07:09:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77353' 00:16:42.811 07:09:26 -- common/autotest_common.sh@945 -- # kill 77353 00:16:42.811 07:09:26 -- common/autotest_common.sh@950 -- # wait 77353 00:16:42.811 07:09:26 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:42.811 07:09:26 -- common/autotest_common.sh@640 -- # local es=0 00:16:42.811 07:09:26 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:42.811 07:09:26 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:42.811 07:09:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:42.811 07:09:26 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:42.811 07:09:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:42.811 07:09:26 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:42.811 07:09:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:42.811 07:09:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:42.811 07:09:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:42.811 07:09:26 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:42.811 07:09:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:42.811 07:09:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:42.811 07:09:26 -- 
target/tls.sh@28 -- # bdevperf_pid=77501 00:16:42.811 07:09:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:42.811 07:09:26 -- target/tls.sh@31 -- # waitforlisten 77501 /var/tmp/bdevperf.sock 00:16:42.811 07:09:26 -- common/autotest_common.sh@819 -- # '[' -z 77501 ']' 00:16:42.811 07:09:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.811 07:09:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:42.811 07:09:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.811 07:09:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:42.811 07:09:26 -- common/autotest_common.sh@10 -- # set +x 00:16:42.811 [2024-07-11 07:09:26.406181] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:42.811 [2024-07-11 07:09:26.406514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77501 ] 00:16:42.811 [2024-07-11 07:09:26.536569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.811 [2024-07-11 07:09:26.621199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.377 07:09:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:43.377 07:09:27 -- common/autotest_common.sh@852 -- # return 0 00:16:43.377 07:09:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:43.638 [2024-07-11 07:09:27.527556] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:43.638 [2024-07-11 07:09:27.536201] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:43.638 [2024-07-11 07:09:27.537133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe2570 (107): Transport endpoint is not connected 00:16:43.638 [2024-07-11 07:09:27.538117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe2570 (9): Bad file descriptor 00:16:43.638 [2024-07-11 07:09:27.539113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:43.638 [2024-07-11 07:09:27.539138] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:43.638 [2024-07-11 07:09:27.539150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
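
The PSK files key1.txt and key2.txt used by these bdevperf attach attempts were produced earlier by format_interchange_psk: the configured key gets the CRC32 of its bytes appended (taken from the gzip trailer, whose last 8 bytes are CRC32 followed by ISIZE) and the result is base64-encoded under the NVMeTLSkey-1 prefix. A minimal sketch of the same pipeline, using the key and the 01 hash label from the trace above:

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # CRC32 of the key bytes, little-endian, from the gzip trailer
  echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
  # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as shown in the trace
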
00:16:43.638 2024/07/11 07:09:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:43.638 request: 00:16:43.638 { 00:16:43.638 "method": "bdev_nvme_attach_controller", 00:16:43.638 "params": { 00:16:43.638 "name": "TLSTEST", 00:16:43.638 "trtype": "tcp", 00:16:43.638 "traddr": "10.0.0.2", 00:16:43.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:43.638 "adrfam": "ipv4", 00:16:43.638 "trsvcid": "4420", 00:16:43.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.638 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:43.638 } 00:16:43.638 } 00:16:43.638 Got JSON-RPC error response 00:16:43.638 GoRPCClient: error on JSON-RPC call 00:16:43.638 07:09:27 -- target/tls.sh@36 -- # killprocess 77501 00:16:43.638 07:09:27 -- common/autotest_common.sh@926 -- # '[' -z 77501 ']' 00:16:43.638 07:09:27 -- common/autotest_common.sh@930 -- # kill -0 77501 00:16:43.638 07:09:27 -- common/autotest_common.sh@931 -- # uname 00:16:43.638 07:09:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:43.638 07:09:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77501 00:16:43.638 killing process with pid 77501 00:16:43.638 Received shutdown signal, test time was about 10.000000 seconds 00:16:43.638 00:16:43.638 Latency(us) 00:16:43.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.638 =================================================================================================================== 00:16:43.638 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.638 07:09:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:43.638 07:09:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:43.638 07:09:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77501' 00:16:43.638 07:09:27 -- common/autotest_common.sh@945 -- # kill 77501 00:16:43.638 07:09:27 -- common/autotest_common.sh@950 -- # wait 77501 00:16:43.897 07:09:27 -- target/tls.sh@37 -- # return 1 00:16:43.897 07:09:27 -- common/autotest_common.sh@643 -- # es=1 00:16:43.897 07:09:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:43.897 07:09:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:43.897 07:09:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:43.897 07:09:27 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:43.897 07:09:27 -- common/autotest_common.sh@640 -- # local es=0 00:16:43.897 07:09:27 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:43.897 07:09:27 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:43.897 07:09:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:43.897 07:09:27 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:43.897 07:09:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:43.897 07:09:27 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:43.897 07:09:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:43.897 07:09:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:43.897 07:09:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:43.897 07:09:27 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:43.897 07:09:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:43.897 07:09:27 -- target/tls.sh@28 -- # bdevperf_pid=77547 00:16:43.897 07:09:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:43.897 07:09:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.897 07:09:27 -- target/tls.sh@31 -- # waitforlisten 77547 /var/tmp/bdevperf.sock 00:16:43.897 07:09:27 -- common/autotest_common.sh@819 -- # '[' -z 77547 ']' 00:16:43.897 07:09:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.897 07:09:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:43.897 07:09:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.897 07:09:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:43.897 07:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:43.897 [2024-07-11 07:09:27.916842] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:43.897 [2024-07-11 07:09:27.917066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77547 ] 00:16:44.156 [2024-07-11 07:09:28.044685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.156 [2024-07-11 07:09:28.130184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.091 07:09:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:45.091 07:09:28 -- common/autotest_common.sh@852 -- # return 0 00:16:45.091 07:09:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.091 [2024-07-11 07:09:29.096251] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.091 [2024-07-11 07:09:29.106180] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:45.091 [2024-07-11 07:09:29.106218] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:45.091 [2024-07-11 07:09:29.106339] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:45.091 [2024-07-11 07:09:29.106849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198c570 (107): Transport endpoint is not connected 
00:16:45.091 [2024-07-11 07:09:29.107825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198c570 (9): Bad file descriptor 00:16:45.091 [2024-07-11 07:09:29.108821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:45.091 [2024-07-11 07:09:29.108846] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:45.091 [2024-07-11 07:09:29.108860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:45.091 2024/07/11 07:09:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:45.091 request: 00:16:45.091 { 00:16:45.091 "method": "bdev_nvme_attach_controller", 00:16:45.091 "params": { 00:16:45.091 "name": "TLSTEST", 00:16:45.091 "trtype": "tcp", 00:16:45.091 "traddr": "10.0.0.2", 00:16:45.091 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:45.091 "adrfam": "ipv4", 00:16:45.091 "trsvcid": "4420", 00:16:45.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.091 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:45.091 } 00:16:45.091 } 00:16:45.091 Got JSON-RPC error response 00:16:45.091 GoRPCClient: error on JSON-RPC call 00:16:45.091 07:09:29 -- target/tls.sh@36 -- # killprocess 77547 00:16:45.091 07:09:29 -- common/autotest_common.sh@926 -- # '[' -z 77547 ']' 00:16:45.091 07:09:29 -- common/autotest_common.sh@930 -- # kill -0 77547 00:16:45.091 07:09:29 -- common/autotest_common.sh@931 -- # uname 00:16:45.091 07:09:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:45.091 07:09:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77547 00:16:45.350 killing process with pid 77547 00:16:45.350 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.350 00:16:45.350 Latency(us) 00:16:45.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.350 =================================================================================================================== 00:16:45.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:45.350 07:09:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:45.350 07:09:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:45.350 07:09:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77547' 00:16:45.350 07:09:29 -- common/autotest_common.sh@945 -- # kill 77547 00:16:45.350 07:09:29 -- common/autotest_common.sh@950 -- # wait 77547 00:16:45.608 07:09:29 -- target/tls.sh@37 -- # return 1 00:16:45.608 07:09:29 -- common/autotest_common.sh@643 -- # es=1 00:16:45.608 07:09:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:45.608 07:09:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:45.608 07:09:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:45.608 07:09:29 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.608 07:09:29 -- common/autotest_common.sh@640 -- # local es=0 00:16:45.608 07:09:29 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.608 07:09:29 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:45.608 07:09:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:45.608 07:09:29 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:45.608 07:09:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:45.608 07:09:29 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.608 07:09:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:45.608 07:09:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:45.608 07:09:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:45.608 07:09:29 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:45.608 07:09:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.608 07:09:29 -- target/tls.sh@28 -- # bdevperf_pid=77587 00:16:45.608 07:09:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:45.608 07:09:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.608 07:09:29 -- target/tls.sh@31 -- # waitforlisten 77587 /var/tmp/bdevperf.sock 00:16:45.608 07:09:29 -- common/autotest_common.sh@819 -- # '[' -z 77587 ']' 00:16:45.608 07:09:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.608 07:09:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:45.608 07:09:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:45.608 07:09:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:45.608 07:09:29 -- common/autotest_common.sh@10 -- # set +x 00:16:45.608 [2024-07-11 07:09:29.501786] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:45.608 [2024-07-11 07:09:29.502075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77587 ] 00:16:45.608 [2024-07-11 07:09:29.639705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.867 [2024-07-11 07:09:29.744556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.433 07:09:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:46.433 07:09:30 -- common/autotest_common.sh@852 -- # return 0 00:16:46.433 07:09:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.692 [2024-07-11 07:09:30.649558] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:46.692 [2024-07-11 07:09:30.656384] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:46.692 [2024-07-11 07:09:30.656420] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:46.692 [2024-07-11 07:09:30.656530] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:46.692 [2024-07-11 07:09:30.656954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76c570 (107): Transport endpoint is not connected 00:16:46.692 [2024-07-11 07:09:30.657941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76c570 (9): Bad file descriptor 00:16:46.692 [2024-07-11 07:09:30.658938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:46.692 [2024-07-11 07:09:30.658958] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:46.692 [2024-07-11 07:09:30.658977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:46.692 2024/07/11 07:09:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:46.692 request: 00:16:46.692 { 00:16:46.692 "method": "bdev_nvme_attach_controller", 00:16:46.692 "params": { 00:16:46.692 "name": "TLSTEST", 00:16:46.692 "trtype": "tcp", 00:16:46.692 "traddr": "10.0.0.2", 00:16:46.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.692 "adrfam": "ipv4", 00:16:46.692 "trsvcid": "4420", 00:16:46.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:46.692 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:46.692 } 00:16:46.692 } 00:16:46.692 Got JSON-RPC error response 00:16:46.692 GoRPCClient: error on JSON-RPC call 00:16:46.692 07:09:30 -- target/tls.sh@36 -- # killprocess 77587 00:16:46.692 07:09:30 -- common/autotest_common.sh@926 -- # '[' -z 77587 ']' 00:16:46.692 07:09:30 -- common/autotest_common.sh@930 -- # kill -0 77587 00:16:46.692 07:09:30 -- common/autotest_common.sh@931 -- # uname 00:16:46.692 07:09:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:46.692 07:09:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77587 00:16:46.692 07:09:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:46.692 07:09:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:46.692 killing process with pid 77587 00:16:46.692 07:09:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77587' 00:16:46.692 07:09:30 -- common/autotest_common.sh@945 -- # kill 77587 00:16:46.692 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.692 00:16:46.692 Latency(us) 00:16:46.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.692 =================================================================================================================== 00:16:46.692 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.692 07:09:30 -- common/autotest_common.sh@950 -- # wait 77587 00:16:46.951 07:09:30 -- target/tls.sh@37 -- # return 1 00:16:46.951 07:09:30 -- common/autotest_common.sh@643 -- # es=1 00:16:46.951 07:09:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:46.951 07:09:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:46.951 07:09:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:46.951 07:09:30 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:46.951 07:09:30 -- common/autotest_common.sh@640 -- # local es=0 00:16:46.951 07:09:30 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:46.951 07:09:30 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:46.951 07:09:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:46.951 07:09:30 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:46.951 07:09:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:46.951 07:09:30 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:46.951 07:09:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:46.951 07:09:30 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:16:46.951 07:09:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:46.951 07:09:30 -- target/tls.sh@23 -- # psk= 00:16:46.951 07:09:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.951 07:09:31 -- target/tls.sh@28 -- # bdevperf_pid=77638 00:16:46.951 07:09:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.951 07:09:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.951 07:09:31 -- target/tls.sh@31 -- # waitforlisten 77638 /var/tmp/bdevperf.sock 00:16:46.951 07:09:31 -- common/autotest_common.sh@819 -- # '[' -z 77638 ']' 00:16:46.951 07:09:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.951 07:09:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:46.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.951 07:09:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.951 07:09:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:46.951 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:16:47.210 [2024-07-11 07:09:31.057915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:47.210 [2024-07-11 07:09:31.058013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77638 ] 00:16:47.210 [2024-07-11 07:09:31.195534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.468 [2024-07-11 07:09:31.271713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.033 07:09:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.033 07:09:31 -- common/autotest_common.sh@852 -- # return 0 00:16:48.033 07:09:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:48.291 [2024-07-11 07:09:32.126637] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:48.291 [2024-07-11 07:09:32.128383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1925170 (9): Bad file descriptor 00:16:48.291 [2024-07-11 07:09:32.129378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:48.291 [2024-07-11 07:09:32.129399] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:48.291 [2024-07-11 07:09:32.129414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:48.291 2024/07/11 07:09:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:48.291 request: 00:16:48.291 { 00:16:48.291 "method": "bdev_nvme_attach_controller", 00:16:48.291 "params": { 00:16:48.291 "name": "TLSTEST", 00:16:48.291 "trtype": "tcp", 00:16:48.291 "traddr": "10.0.0.2", 00:16:48.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.291 "adrfam": "ipv4", 00:16:48.291 "trsvcid": "4420", 00:16:48.291 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:48.291 } 00:16:48.291 } 00:16:48.291 Got JSON-RPC error response 00:16:48.291 GoRPCClient: error on JSON-RPC call 00:16:48.291 07:09:32 -- target/tls.sh@36 -- # killprocess 77638 00:16:48.291 07:09:32 -- common/autotest_common.sh@926 -- # '[' -z 77638 ']' 00:16:48.291 07:09:32 -- common/autotest_common.sh@930 -- # kill -0 77638 00:16:48.291 07:09:32 -- common/autotest_common.sh@931 -- # uname 00:16:48.291 07:09:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:48.291 07:09:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77638 00:16:48.291 07:09:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:48.291 07:09:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:48.291 killing process with pid 77638 00:16:48.291 07:09:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77638' 00:16:48.291 07:09:32 -- common/autotest_common.sh@945 -- # kill 77638 00:16:48.291 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.291 00:16:48.291 Latency(us) 00:16:48.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.291 =================================================================================================================== 00:16:48.291 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.291 07:09:32 -- common/autotest_common.sh@950 -- # wait 77638 00:16:48.550 07:09:32 -- target/tls.sh@37 -- # return 1 00:16:48.550 07:09:32 -- common/autotest_common.sh@643 -- # es=1 00:16:48.550 07:09:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:48.550 07:09:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:48.550 07:09:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:48.550 07:09:32 -- target/tls.sh@167 -- # killprocess 76982 00:16:48.550 07:09:32 -- common/autotest_common.sh@926 -- # '[' -z 76982 ']' 00:16:48.550 07:09:32 -- common/autotest_common.sh@930 -- # kill -0 76982 00:16:48.550 07:09:32 -- common/autotest_common.sh@931 -- # uname 00:16:48.550 07:09:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:48.550 07:09:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76982 00:16:48.550 07:09:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:48.550 07:09:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:48.550 killing process with pid 76982 00:16:48.550 07:09:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76982' 00:16:48.550 07:09:32 -- common/autotest_common.sh@945 -- # kill 76982 00:16:48.550 07:09:32 -- common/autotest_common.sh@950 -- # wait 76982 00:16:48.809 07:09:32 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:16:48.809 07:09:32 -- 
target/tls.sh@49 -- # local key hash crc 00:16:48.809 07:09:32 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:48.809 07:09:32 -- target/tls.sh@51 -- # hash=02 00:16:48.809 07:09:32 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:48.809 07:09:32 -- target/tls.sh@52 -- # gzip -1 -c 00:16:48.809 07:09:32 -- target/tls.sh@52 -- # tail -c8 00:16:48.809 07:09:32 -- target/tls.sh@52 -- # head -c 4 00:16:48.809 07:09:32 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:48.809 07:09:32 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:48.809 07:09:32 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:48.809 07:09:32 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:48.809 07:09:32 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:48.809 07:09:32 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:48.809 07:09:32 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:48.809 07:09:32 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:48.809 07:09:32 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:48.809 07:09:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.809 07:09:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:48.809 07:09:32 -- common/autotest_common.sh@10 -- # set +x 00:16:48.809 07:09:32 -- nvmf/common.sh@469 -- # nvmfpid=77699 00:16:48.809 07:09:32 -- nvmf/common.sh@470 -- # waitforlisten 77699 00:16:48.809 07:09:32 -- common/autotest_common.sh@819 -- # '[' -z 77699 ']' 00:16:48.809 07:09:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:48.809 07:09:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.809 07:09:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.809 07:09:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.809 07:09:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.809 07:09:32 -- common/autotest_common.sh@10 -- # set +x 00:16:48.809 [2024-07-11 07:09:32.789973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:48.809 [2024-07-11 07:09:32.790049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.068 [2024-07-11 07:09:32.922665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.068 [2024-07-11 07:09:33.003333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.068 [2024-07-11 07:09:33.003499] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.068 [2024-07-11 07:09:33.003513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
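The format_interchange_psk steps traced above (gzip -1 to pull the CRC-32 out of the gzip trailer, then base64 over key+CRC) can be reproduced outside the test script. A minimal Python sketch, assuming the same 48-character key and the "02" hash identifier passed on the command line; it should yield the NVMeTLSkey-1:02:... string that gets written to key_long.txt above:

    import base64, struct, zlib

    key = "00112233445566778899aabbccddeeff0011223344556677"  # key passed to format_interchange_psk above
    hash_id = "02"                                              # hash identifier from the same call

    # "gzip -1 -c | tail -c8 | head -c 4" extracts the CRC-32 from the gzip trailer
    # (little-endian); zlib.crc32 computes the same checksum directly.
    crc = struct.pack("<I", zlib.crc32(key.encode()))
    print(f"NVMeTLSkey-1:{hash_id}:{base64.b64encode(key.encode() + crc).decode()}:")
    # expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The unprintable bytes shown in the crc= line above are simply the raw little-endian CRC-32 bytes before base64 encoding.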
00:16:49.068 [2024-07-11 07:09:33.003521] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.068 [2024-07-11 07:09:33.003551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.634 07:09:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.634 07:09:33 -- common/autotest_common.sh@852 -- # return 0 00:16:49.634 07:09:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.634 07:09:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:49.634 07:09:33 -- common/autotest_common.sh@10 -- # set +x 00:16:49.892 07:09:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.892 07:09:33 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:49.892 07:09:33 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:49.892 07:09:33 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:49.892 [2024-07-11 07:09:33.939240] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.149 07:09:33 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:50.407 07:09:34 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:50.407 [2024-07-11 07:09:34.451248] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:50.407 [2024-07-11 07:09:34.451480] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.666 07:09:34 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:50.666 malloc0 00:16:50.937 07:09:34 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:50.937 07:09:34 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:51.209 07:09:35 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:51.209 07:09:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.209 07:09:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.209 07:09:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:51.209 07:09:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:51.209 07:09:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.209 07:09:35 -- target/tls.sh@28 -- # bdevperf_pid=77796 00:16:51.209 07:09:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.209 07:09:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.209 07:09:35 -- target/tls.sh@31 -- # waitforlisten 77796 /var/tmp/bdevperf.sock 00:16:51.209 07:09:35 -- common/autotest_common.sh@819 -- # '[' -z 77796 ']' 00:16:51.209 07:09:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.209 
07:09:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.209 07:09:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.209 07:09:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.209 07:09:35 -- common/autotest_common.sh@10 -- # set +x 00:16:51.209 [2024-07-11 07:09:35.260710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:51.209 [2024-07-11 07:09:35.260774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77796 ] 00:16:51.467 [2024-07-11 07:09:35.388543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.467 [2024-07-11 07:09:35.478737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.399 07:09:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.399 07:09:36 -- common/autotest_common.sh@852 -- # return 0 00:16:52.399 07:09:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:52.399 [2024-07-11 07:09:36.327428] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:52.399 TLSTESTn1 00:16:52.399 07:09:36 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:52.657 Running I/O for 10 seconds... 
00:17:02.624 00:17:02.624 Latency(us) 00:17:02.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.624 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:02.624 Verification LBA range: start 0x0 length 0x2000 00:17:02.624 TLSTESTn1 : 10.01 6203.33 24.23 0.00 0.00 20602.93 3932.16 22639.71 00:17:02.624 =================================================================================================================== 00:17:02.624 Total : 6203.33 24.23 0.00 0.00 20602.93 3932.16 22639.71 00:17:02.624 0 00:17:02.624 07:09:46 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.624 07:09:46 -- target/tls.sh@45 -- # killprocess 77796 00:17:02.624 07:09:46 -- common/autotest_common.sh@926 -- # '[' -z 77796 ']' 00:17:02.624 07:09:46 -- common/autotest_common.sh@930 -- # kill -0 77796 00:17:02.624 07:09:46 -- common/autotest_common.sh@931 -- # uname 00:17:02.624 07:09:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:02.624 07:09:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77796 00:17:02.624 07:09:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:02.624 07:09:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:02.624 killing process with pid 77796 00:17:02.624 07:09:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77796' 00:17:02.624 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.624 00:17:02.624 Latency(us) 00:17:02.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.624 =================================================================================================================== 00:17:02.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:02.624 07:09:46 -- common/autotest_common.sh@945 -- # kill 77796 00:17:02.624 07:09:46 -- common/autotest_common.sh@950 -- # wait 77796 00:17:02.882 07:09:46 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:02.882 07:09:46 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:02.882 07:09:46 -- common/autotest_common.sh@640 -- # local es=0 00:17:02.882 07:09:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:02.882 07:09:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:02.882 07:09:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:02.882 07:09:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:02.882 07:09:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:02.882 07:09:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:02.882 07:09:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:02.882 07:09:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:02.882 07:09:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:02.882 07:09:46 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:02.882 07:09:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.882 07:09:46 -- target/tls.sh@28 -- # bdevperf_pid=77943 
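The numbers in the TLSTESTn1 table above are internally consistent with the -q 128 -o 4096 arguments passed to bdevperf: the MiB/s column follows directly from the IOPS column, and the IOPS roughly follow from the average latency via Little's law. A quick check:

    iops, avg_lat_us, qd, io_size = 6203.33, 20602.93, 128, 4096  # values from the run and the bdevperf flags above
    print(iops * io_size / 2**20)     # ~24.23 MiB/s, matching the MiB/s column
    print(qd / (avg_lat_us * 1e-6))   # ~6213 IOPS, close to the reported 6203.33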
00:17:02.882 07:09:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.882 07:09:46 -- target/tls.sh@31 -- # waitforlisten 77943 /var/tmp/bdevperf.sock 00:17:02.882 07:09:46 -- common/autotest_common.sh@819 -- # '[' -z 77943 ']' 00:17:02.882 07:09:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:02.882 07:09:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.882 07:09:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:02.883 07:09:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.883 07:09:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:02.883 07:09:46 -- common/autotest_common.sh@10 -- # set +x 00:17:03.141 [2024-07-11 07:09:46.952078] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:03.141 [2024-07-11 07:09:46.952176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77943 ] 00:17:03.141 [2024-07-11 07:09:47.091072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.141 [2024-07-11 07:09:47.167319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.074 07:09:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.074 07:09:47 -- common/autotest_common.sh@852 -- # return 0 00:17:04.074 07:09:47 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.074 [2024-07-11 07:09:48.128631] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:04.074 [2024-07-11 07:09:48.128696] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:04.074 2024/07/11 07:09:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.332 request: 00:17:04.332 { 00:17:04.332 "method": "bdev_nvme_attach_controller", 00:17:04.332 "params": { 00:17:04.332 "name": "TLSTEST", 00:17:04.332 "trtype": "tcp", 00:17:04.332 "traddr": "10.0.0.2", 00:17:04.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.332 "adrfam": "ipv4", 00:17:04.332 "trsvcid": "4420", 00:17:04.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.332 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:04.332 } 00:17:04.332 } 00:17:04.332 Got JSON-RPC error response 00:17:04.332 GoRPCClient: error on JSON-RPC call 00:17:04.332 07:09:48 -- target/tls.sh@36 -- # killprocess 77943 00:17:04.332 07:09:48 -- common/autotest_common.sh@926 -- # '[' -z 77943 ']' 
00:17:04.332 07:09:48 -- common/autotest_common.sh@930 -- # kill -0 77943 00:17:04.332 07:09:48 -- common/autotest_common.sh@931 -- # uname 00:17:04.332 07:09:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.332 07:09:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77943 00:17:04.332 07:09:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:04.332 killing process with pid 77943 00:17:04.332 07:09:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:04.332 07:09:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77943' 00:17:04.332 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.332 00:17:04.332 Latency(us) 00:17:04.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.332 =================================================================================================================== 00:17:04.332 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.332 07:09:48 -- common/autotest_common.sh@945 -- # kill 77943 00:17:04.332 07:09:48 -- common/autotest_common.sh@950 -- # wait 77943 00:17:04.590 07:09:48 -- target/tls.sh@37 -- # return 1 00:17:04.590 07:09:48 -- common/autotest_common.sh@643 -- # es=1 00:17:04.590 07:09:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:04.590 07:09:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:04.590 07:09:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:04.590 07:09:48 -- target/tls.sh@183 -- # killprocess 77699 00:17:04.590 07:09:48 -- common/autotest_common.sh@926 -- # '[' -z 77699 ']' 00:17:04.590 07:09:48 -- common/autotest_common.sh@930 -- # kill -0 77699 00:17:04.590 07:09:48 -- common/autotest_common.sh@931 -- # uname 00:17:04.590 07:09:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.590 07:09:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77699 00:17:04.590 07:09:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:04.590 07:09:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:04.590 killing process with pid 77699 00:17:04.590 07:09:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77699' 00:17:04.590 07:09:48 -- common/autotest_common.sh@945 -- # kill 77699 00:17:04.590 07:09:48 -- common/autotest_common.sh@950 -- # wait 77699 00:17:04.848 07:09:48 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:04.848 07:09:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:04.848 07:09:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:04.848 07:09:48 -- common/autotest_common.sh@10 -- # set +x 00:17:04.848 07:09:48 -- nvmf/common.sh@469 -- # nvmfpid=77999 00:17:04.848 07:09:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.848 07:09:48 -- nvmf/common.sh@470 -- # waitforlisten 77999 00:17:04.848 07:09:48 -- common/autotest_common.sh@819 -- # '[' -z 77999 ']' 00:17:04.848 07:09:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.848 07:09:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:04.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.848 07:09:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
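The failure above, like the nvmf_subsystem_add_host failure later in the run, stems from the chmod 0666 applied to key_long.txt before this negative test: the target rejects PSK files that are readable by group or other. The sketch below only illustrates that rule and is not SPDK's actual tcp_load_psk code; the path is the one used throughout this run:

    import os, stat

    path = "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        # group/other bits set (e.g. after chmod 0666) -> refuse the key, mirroring the
        # "Incorrect permissions for PSK file" errors seen in this log
        raise PermissionError(f"{path} must be accessible only by its owner (chmod 0600)")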
00:17:04.848 07:09:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:04.848 07:09:48 -- common/autotest_common.sh@10 -- # set +x 00:17:04.848 [2024-07-11 07:09:48.764915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:04.848 [2024-07-11 07:09:48.764975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.848 [2024-07-11 07:09:48.889785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.106 [2024-07-11 07:09:48.963385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:05.106 [2024-07-11 07:09:48.963595] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.106 [2024-07-11 07:09:48.963609] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.106 [2024-07-11 07:09:48.963617] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.106 [2024-07-11 07:09:48.963641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.671 07:09:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:05.671 07:09:49 -- common/autotest_common.sh@852 -- # return 0 00:17:05.671 07:09:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.671 07:09:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:05.671 07:09:49 -- common/autotest_common.sh@10 -- # set +x 00:17:05.672 07:09:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.672 07:09:49 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.672 07:09:49 -- common/autotest_common.sh@640 -- # local es=0 00:17:05.672 07:09:49 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.672 07:09:49 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:05.672 07:09:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.672 07:09:49 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:05.672 07:09:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.672 07:09:49 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.672 07:09:49 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.672 07:09:49 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:05.929 [2024-07-11 07:09:49.892974] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.929 07:09:49 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:06.186 07:09:50 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:06.442 [2024-07-11 07:09:50.405093] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:06.442 [2024-07-11 07:09:50.405299] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:17:06.442 07:09:50 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:06.699 malloc0 00:17:06.699 07:09:50 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:06.956 07:09:50 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.215 [2024-07-11 07:09:51.137153] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:07.215 [2024-07-11 07:09:51.137187] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:07.215 [2024-07-11 07:09:51.137212] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:07.215 2024/07/11 07:09:51 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:07.215 request: 00:17:07.215 { 00:17:07.215 "method": "nvmf_subsystem_add_host", 00:17:07.215 "params": { 00:17:07.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.215 "host": "nqn.2016-06.io.spdk:host1", 00:17:07.215 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:07.215 } 00:17:07.215 } 00:17:07.215 Got JSON-RPC error response 00:17:07.215 GoRPCClient: error on JSON-RPC call 00:17:07.215 07:09:51 -- common/autotest_common.sh@643 -- # es=1 00:17:07.215 07:09:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:07.215 07:09:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:07.215 07:09:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:07.215 07:09:51 -- target/tls.sh@189 -- # killprocess 77999 00:17:07.215 07:09:51 -- common/autotest_common.sh@926 -- # '[' -z 77999 ']' 00:17:07.215 07:09:51 -- common/autotest_common.sh@930 -- # kill -0 77999 00:17:07.215 07:09:51 -- common/autotest_common.sh@931 -- # uname 00:17:07.215 07:09:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:07.215 07:09:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77999 00:17:07.215 killing process with pid 77999 00:17:07.215 07:09:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:07.215 07:09:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:07.215 07:09:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77999' 00:17:07.215 07:09:51 -- common/autotest_common.sh@945 -- # kill 77999 00:17:07.215 07:09:51 -- common/autotest_common.sh@950 -- # wait 77999 00:17:07.474 07:09:51 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.474 07:09:51 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:07.474 07:09:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:07.474 07:09:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:07.474 07:09:51 -- common/autotest_common.sh@10 -- # set +x 00:17:07.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:07.474 07:09:51 -- nvmf/common.sh@469 -- # nvmfpid=78110 00:17:07.474 07:09:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.474 07:09:51 -- nvmf/common.sh@470 -- # waitforlisten 78110 00:17:07.474 07:09:51 -- common/autotest_common.sh@819 -- # '[' -z 78110 ']' 00:17:07.474 07:09:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.474 07:09:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:07.474 07:09:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.474 07:09:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:07.474 07:09:51 -- common/autotest_common.sh@10 -- # set +x 00:17:07.474 [2024-07-11 07:09:51.476743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:07.474 [2024-07-11 07:09:51.477044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.733 [2024-07-11 07:09:51.613986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.733 [2024-07-11 07:09:51.696162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.733 [2024-07-11 07:09:51.696643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.733 [2024-07-11 07:09:51.696776] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.733 [2024-07-11 07:09:51.696795] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:07.733 [2024-07-11 07:09:51.696834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.666 07:09:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:08.666 07:09:52 -- common/autotest_common.sh@852 -- # return 0 00:17:08.666 07:09:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:08.666 07:09:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:08.666 07:09:52 -- common/autotest_common.sh@10 -- # set +x 00:17:08.666 07:09:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.666 07:09:52 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.666 07:09:52 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.666 07:09:52 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:08.924 [2024-07-11 07:09:52.733394] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.924 07:09:52 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:09.182 07:09:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:09.182 [2024-07-11 07:09:53.185528] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:09.182 [2024-07-11 07:09:53.185718] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.182 07:09:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:09.440 malloc0 00:17:09.440 07:09:53 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:09.699 07:09:53 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:09.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.960 07:09:53 -- target/tls.sh@197 -- # bdevperf_pid=78207 00:17:09.960 07:09:53 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.960 07:09:53 -- target/tls.sh@200 -- # waitforlisten 78207 /var/tmp/bdevperf.sock 00:17:09.960 07:09:53 -- common/autotest_common.sh@819 -- # '[' -z 78207 ']' 00:17:09.960 07:09:53 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:09.960 07:09:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.960 07:09:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.960 07:09:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.960 07:09:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.960 07:09:53 -- common/autotest_common.sh@10 -- # set +x 00:17:09.960 [2024-07-11 07:09:53.934387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:09.960 [2024-07-11 07:09:53.934483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78207 ] 00:17:10.217 [2024-07-11 07:09:54.061616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.217 [2024-07-11 07:09:54.154565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.148 07:09:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.148 07:09:54 -- common/autotest_common.sh@852 -- # return 0 00:17:11.148 07:09:54 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.148 [2024-07-11 07:09:55.067361] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.148 TLSTESTn1 00:17:11.148 07:09:55 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:11.714 07:09:55 -- target/tls.sh@205 -- # tgtconf='{ 00:17:11.714 "subsystems": [ 00:17:11.714 { 00:17:11.714 "subsystem": "iobuf", 00:17:11.714 "config": [ 00:17:11.714 { 00:17:11.714 "method": "iobuf_set_options", 00:17:11.714 "params": { 00:17:11.714 "large_bufsize": 135168, 00:17:11.714 "large_pool_count": 1024, 00:17:11.714 "small_bufsize": 8192, 00:17:11.714 "small_pool_count": 8192 00:17:11.714 } 00:17:11.714 } 00:17:11.714 ] 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "subsystem": "sock", 00:17:11.714 "config": [ 00:17:11.714 { 00:17:11.714 "method": "sock_impl_set_options", 00:17:11.714 "params": { 00:17:11.714 "enable_ktls": false, 00:17:11.714 "enable_placement_id": 0, 00:17:11.714 "enable_quickack": false, 00:17:11.714 "enable_recv_pipe": true, 00:17:11.714 "enable_zerocopy_send_client": false, 00:17:11.714 "enable_zerocopy_send_server": true, 00:17:11.714 "impl_name": "posix", 00:17:11.714 "recv_buf_size": 2097152, 00:17:11.714 "send_buf_size": 2097152, 00:17:11.714 "tls_version": 0, 00:17:11.714 "zerocopy_threshold": 0 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "sock_impl_set_options", 00:17:11.714 "params": { 00:17:11.714 "enable_ktls": false, 00:17:11.714 "enable_placement_id": 0, 00:17:11.714 "enable_quickack": false, 00:17:11.714 "enable_recv_pipe": true, 00:17:11.714 "enable_zerocopy_send_client": false, 00:17:11.714 "enable_zerocopy_send_server": true, 00:17:11.714 "impl_name": "ssl", 00:17:11.714 "recv_buf_size": 4096, 00:17:11.714 "send_buf_size": 4096, 00:17:11.714 "tls_version": 0, 00:17:11.714 "zerocopy_threshold": 0 00:17:11.714 } 00:17:11.714 } 00:17:11.714 ] 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "subsystem": "vmd", 00:17:11.714 "config": [] 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "subsystem": "accel", 00:17:11.714 "config": [ 00:17:11.714 { 00:17:11.714 "method": "accel_set_options", 00:17:11.714 "params": { 00:17:11.714 "buf_count": 2048, 00:17:11.714 "large_cache_size": 16, 00:17:11.714 "sequence_count": 2048, 00:17:11.714 "small_cache_size": 128, 00:17:11.714 "task_count": 2048 00:17:11.714 } 00:17:11.714 } 00:17:11.714 ] 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "subsystem": "bdev", 00:17:11.714 "config": [ 00:17:11.714 { 00:17:11.714 "method": "bdev_set_options", 00:17:11.714 "params": { 00:17:11.714 
"bdev_auto_examine": true, 00:17:11.714 "bdev_io_cache_size": 256, 00:17:11.714 "bdev_io_pool_size": 65535, 00:17:11.714 "iobuf_large_cache_size": 16, 00:17:11.714 "iobuf_small_cache_size": 128 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "bdev_raid_set_options", 00:17:11.714 "params": { 00:17:11.714 "process_window_size_kb": 1024 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "bdev_iscsi_set_options", 00:17:11.714 "params": { 00:17:11.714 "timeout_sec": 30 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "bdev_nvme_set_options", 00:17:11.714 "params": { 00:17:11.714 "action_on_timeout": "none", 00:17:11.714 "allow_accel_sequence": false, 00:17:11.714 "arbitration_burst": 0, 00:17:11.714 "bdev_retry_count": 3, 00:17:11.714 "ctrlr_loss_timeout_sec": 0, 00:17:11.714 "delay_cmd_submit": true, 00:17:11.714 "fast_io_fail_timeout_sec": 0, 00:17:11.714 "generate_uuids": false, 00:17:11.714 "high_priority_weight": 0, 00:17:11.714 "io_path_stat": false, 00:17:11.714 "io_queue_requests": 0, 00:17:11.714 "keep_alive_timeout_ms": 10000, 00:17:11.714 "low_priority_weight": 0, 00:17:11.714 "medium_priority_weight": 0, 00:17:11.714 "nvme_adminq_poll_period_us": 10000, 00:17:11.714 "nvme_ioq_poll_period_us": 0, 00:17:11.714 "reconnect_delay_sec": 0, 00:17:11.714 "timeout_admin_us": 0, 00:17:11.714 "timeout_us": 0, 00:17:11.714 "transport_ack_timeout": 0, 00:17:11.714 "transport_retry_count": 4, 00:17:11.714 "transport_tos": 0 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "bdev_nvme_set_hotplug", 00:17:11.714 "params": { 00:17:11.714 "enable": false, 00:17:11.714 "period_us": 100000 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "bdev_malloc_create", 00:17:11.714 "params": { 00:17:11.714 "block_size": 4096, 00:17:11.714 "name": "malloc0", 00:17:11.714 "num_blocks": 8192, 00:17:11.714 "optimal_io_boundary": 0, 00:17:11.714 "physical_block_size": 4096, 00:17:11.714 "uuid": "17d227f3-94b5-4f3e-b37b-24467fc28431" 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "bdev_wait_for_examine" 00:17:11.714 } 00:17:11.714 ] 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "subsystem": "nbd", 00:17:11.714 "config": [] 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "subsystem": "scheduler", 00:17:11.714 "config": [ 00:17:11.714 { 00:17:11.714 "method": "framework_set_scheduler", 00:17:11.714 "params": { 00:17:11.714 "name": "static" 00:17:11.714 } 00:17:11.714 } 00:17:11.714 ] 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "subsystem": "nvmf", 00:17:11.714 "config": [ 00:17:11.714 { 00:17:11.714 "method": "nvmf_set_config", 00:17:11.714 "params": { 00:17:11.714 "admin_cmd_passthru": { 00:17:11.714 "identify_ctrlr": false 00:17:11.714 }, 00:17:11.714 "discovery_filter": "match_any" 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "nvmf_set_max_subsystems", 00:17:11.714 "params": { 00:17:11.714 "max_subsystems": 1024 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "nvmf_set_crdt", 00:17:11.714 "params": { 00:17:11.714 "crdt1": 0, 00:17:11.714 "crdt2": 0, 00:17:11.714 "crdt3": 0 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "nvmf_create_transport", 00:17:11.714 "params": { 00:17:11.714 "abort_timeout_sec": 1, 00:17:11.714 "buf_cache_size": 4294967295, 00:17:11.714 "c2h_success": false, 00:17:11.714 "dif_insert_or_strip": false, 00:17:11.714 "in_capsule_data_size": 4096, 00:17:11.714 "io_unit_size": 131072, 00:17:11.714 "max_aq_depth": 128, 
00:17:11.714 "max_io_qpairs_per_ctrlr": 127, 00:17:11.714 "max_io_size": 131072, 00:17:11.714 "max_queue_depth": 128, 00:17:11.714 "num_shared_buffers": 511, 00:17:11.714 "sock_priority": 0, 00:17:11.714 "trtype": "TCP", 00:17:11.714 "zcopy": false 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "nvmf_create_subsystem", 00:17:11.714 "params": { 00:17:11.714 "allow_any_host": false, 00:17:11.714 "ana_reporting": false, 00:17:11.714 "max_cntlid": 65519, 00:17:11.714 "max_namespaces": 10, 00:17:11.714 "min_cntlid": 1, 00:17:11.714 "model_number": "SPDK bdev Controller", 00:17:11.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.714 "serial_number": "SPDK00000000000001" 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "nvmf_subsystem_add_host", 00:17:11.714 "params": { 00:17:11.714 "host": "nqn.2016-06.io.spdk:host1", 00:17:11.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.714 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "nvmf_subsystem_add_ns", 00:17:11.714 "params": { 00:17:11.714 "namespace": { 00:17:11.714 "bdev_name": "malloc0", 00:17:11.714 "nguid": "17D227F394B54F3EB37B24467FC28431", 00:17:11.714 "nsid": 1, 00:17:11.714 "uuid": "17d227f3-94b5-4f3e-b37b-24467fc28431" 00:17:11.714 }, 00:17:11.714 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:11.714 } 00:17:11.714 }, 00:17:11.714 { 00:17:11.714 "method": "nvmf_subsystem_add_listener", 00:17:11.714 "params": { 00:17:11.715 "listen_address": { 00:17:11.715 "adrfam": "IPv4", 00:17:11.715 "traddr": "10.0.0.2", 00:17:11.715 "trsvcid": "4420", 00:17:11.715 "trtype": "TCP" 00:17:11.715 }, 00:17:11.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.715 "secure_channel": true 00:17:11.715 } 00:17:11.715 } 00:17:11.715 ] 00:17:11.715 } 00:17:11.715 ] 00:17:11.715 }' 00:17:11.715 07:09:55 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:11.973 07:09:55 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:11.973 "subsystems": [ 00:17:11.973 { 00:17:11.973 "subsystem": "iobuf", 00:17:11.973 "config": [ 00:17:11.973 { 00:17:11.973 "method": "iobuf_set_options", 00:17:11.973 "params": { 00:17:11.973 "large_bufsize": 135168, 00:17:11.973 "large_pool_count": 1024, 00:17:11.973 "small_bufsize": 8192, 00:17:11.973 "small_pool_count": 8192 00:17:11.973 } 00:17:11.973 } 00:17:11.973 ] 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "subsystem": "sock", 00:17:11.973 "config": [ 00:17:11.973 { 00:17:11.973 "method": "sock_impl_set_options", 00:17:11.973 "params": { 00:17:11.973 "enable_ktls": false, 00:17:11.973 "enable_placement_id": 0, 00:17:11.973 "enable_quickack": false, 00:17:11.973 "enable_recv_pipe": true, 00:17:11.973 "enable_zerocopy_send_client": false, 00:17:11.973 "enable_zerocopy_send_server": true, 00:17:11.973 "impl_name": "posix", 00:17:11.973 "recv_buf_size": 2097152, 00:17:11.973 "send_buf_size": 2097152, 00:17:11.973 "tls_version": 0, 00:17:11.973 "zerocopy_threshold": 0 00:17:11.973 } 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "method": "sock_impl_set_options", 00:17:11.973 "params": { 00:17:11.973 "enable_ktls": false, 00:17:11.973 "enable_placement_id": 0, 00:17:11.973 "enable_quickack": false, 00:17:11.973 "enable_recv_pipe": true, 00:17:11.973 "enable_zerocopy_send_client": false, 00:17:11.973 "enable_zerocopy_send_server": true, 00:17:11.973 "impl_name": "ssl", 00:17:11.973 "recv_buf_size": 4096, 00:17:11.973 "send_buf_size": 4096, 00:17:11.973 
"tls_version": 0, 00:17:11.973 "zerocopy_threshold": 0 00:17:11.973 } 00:17:11.973 } 00:17:11.973 ] 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "subsystem": "vmd", 00:17:11.973 "config": [] 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "subsystem": "accel", 00:17:11.973 "config": [ 00:17:11.973 { 00:17:11.973 "method": "accel_set_options", 00:17:11.973 "params": { 00:17:11.973 "buf_count": 2048, 00:17:11.973 "large_cache_size": 16, 00:17:11.973 "sequence_count": 2048, 00:17:11.973 "small_cache_size": 128, 00:17:11.973 "task_count": 2048 00:17:11.973 } 00:17:11.973 } 00:17:11.973 ] 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "subsystem": "bdev", 00:17:11.973 "config": [ 00:17:11.973 { 00:17:11.973 "method": "bdev_set_options", 00:17:11.973 "params": { 00:17:11.973 "bdev_auto_examine": true, 00:17:11.973 "bdev_io_cache_size": 256, 00:17:11.973 "bdev_io_pool_size": 65535, 00:17:11.973 "iobuf_large_cache_size": 16, 00:17:11.973 "iobuf_small_cache_size": 128 00:17:11.973 } 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "method": "bdev_raid_set_options", 00:17:11.973 "params": { 00:17:11.973 "process_window_size_kb": 1024 00:17:11.973 } 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "method": "bdev_iscsi_set_options", 00:17:11.973 "params": { 00:17:11.973 "timeout_sec": 30 00:17:11.973 } 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "method": "bdev_nvme_set_options", 00:17:11.973 "params": { 00:17:11.973 "action_on_timeout": "none", 00:17:11.973 "allow_accel_sequence": false, 00:17:11.973 "arbitration_burst": 0, 00:17:11.973 "bdev_retry_count": 3, 00:17:11.973 "ctrlr_loss_timeout_sec": 0, 00:17:11.973 "delay_cmd_submit": true, 00:17:11.973 "fast_io_fail_timeout_sec": 0, 00:17:11.973 "generate_uuids": false, 00:17:11.973 "high_priority_weight": 0, 00:17:11.973 "io_path_stat": false, 00:17:11.973 "io_queue_requests": 512, 00:17:11.973 "keep_alive_timeout_ms": 10000, 00:17:11.973 "low_priority_weight": 0, 00:17:11.973 "medium_priority_weight": 0, 00:17:11.973 "nvme_adminq_poll_period_us": 10000, 00:17:11.973 "nvme_ioq_poll_period_us": 0, 00:17:11.973 "reconnect_delay_sec": 0, 00:17:11.973 "timeout_admin_us": 0, 00:17:11.973 "timeout_us": 0, 00:17:11.973 "transport_ack_timeout": 0, 00:17:11.973 "transport_retry_count": 4, 00:17:11.973 "transport_tos": 0 00:17:11.973 } 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "method": "bdev_nvme_attach_controller", 00:17:11.973 "params": { 00:17:11.973 "adrfam": "IPv4", 00:17:11.973 "ctrlr_loss_timeout_sec": 0, 00:17:11.973 "ddgst": false, 00:17:11.973 "fast_io_fail_timeout_sec": 0, 00:17:11.973 "hdgst": false, 00:17:11.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.973 "name": "TLSTEST", 00:17:11.973 "prchk_guard": false, 00:17:11.973 "prchk_reftag": false, 00:17:11.973 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:11.973 "reconnect_delay_sec": 0, 00:17:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.973 "traddr": "10.0.0.2", 00:17:11.973 "trsvcid": "4420", 00:17:11.973 "trtype": "TCP" 00:17:11.973 } 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "method": "bdev_nvme_set_hotplug", 00:17:11.973 "params": { 00:17:11.973 "enable": false, 00:17:11.973 "period_us": 100000 00:17:11.973 } 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "method": "bdev_wait_for_examine" 00:17:11.973 } 00:17:11.973 ] 00:17:11.973 }, 00:17:11.973 { 00:17:11.973 "subsystem": "nbd", 00:17:11.973 "config": [] 00:17:11.973 } 00:17:11.973 ] 00:17:11.973 }' 00:17:11.973 07:09:55 -- target/tls.sh@208 -- # killprocess 78207 00:17:11.973 07:09:55 -- 
common/autotest_common.sh@926 -- # '[' -z 78207 ']' 00:17:11.973 07:09:55 -- common/autotest_common.sh@930 -- # kill -0 78207 00:17:11.973 07:09:55 -- common/autotest_common.sh@931 -- # uname 00:17:11.973 07:09:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.973 07:09:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78207 00:17:11.973 killing process with pid 78207 00:17:11.973 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.973 00:17:11.973 Latency(us) 00:17:11.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.973 =================================================================================================================== 00:17:11.973 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.973 07:09:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:11.973 07:09:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:11.973 07:09:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78207' 00:17:11.973 07:09:55 -- common/autotest_common.sh@945 -- # kill 78207 00:17:11.973 07:09:55 -- common/autotest_common.sh@950 -- # wait 78207 00:17:12.231 07:09:56 -- target/tls.sh@209 -- # killprocess 78110 00:17:12.231 07:09:56 -- common/autotest_common.sh@926 -- # '[' -z 78110 ']' 00:17:12.231 07:09:56 -- common/autotest_common.sh@930 -- # kill -0 78110 00:17:12.231 07:09:56 -- common/autotest_common.sh@931 -- # uname 00:17:12.231 07:09:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.231 07:09:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78110 00:17:12.231 killing process with pid 78110 00:17:12.231 07:09:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:12.231 07:09:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:12.231 07:09:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78110' 00:17:12.231 07:09:56 -- common/autotest_common.sh@945 -- # kill 78110 00:17:12.231 07:09:56 -- common/autotest_common.sh@950 -- # wait 78110 00:17:12.490 07:09:56 -- target/tls.sh@212 -- # echo '{ 00:17:12.490 "subsystems": [ 00:17:12.490 { 00:17:12.490 "subsystem": "iobuf", 00:17:12.490 "config": [ 00:17:12.490 { 00:17:12.490 "method": "iobuf_set_options", 00:17:12.490 "params": { 00:17:12.490 "large_bufsize": 135168, 00:17:12.490 "large_pool_count": 1024, 00:17:12.490 "small_bufsize": 8192, 00:17:12.490 "small_pool_count": 8192 00:17:12.490 } 00:17:12.490 } 00:17:12.490 ] 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "subsystem": "sock", 00:17:12.490 "config": [ 00:17:12.490 { 00:17:12.490 "method": "sock_impl_set_options", 00:17:12.490 "params": { 00:17:12.490 "enable_ktls": false, 00:17:12.490 "enable_placement_id": 0, 00:17:12.490 "enable_quickack": false, 00:17:12.490 "enable_recv_pipe": true, 00:17:12.490 "enable_zerocopy_send_client": false, 00:17:12.490 "enable_zerocopy_send_server": true, 00:17:12.490 "impl_name": "posix", 00:17:12.490 "recv_buf_size": 2097152, 00:17:12.490 "send_buf_size": 2097152, 00:17:12.490 "tls_version": 0, 00:17:12.490 "zerocopy_threshold": 0 00:17:12.490 } 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "method": "sock_impl_set_options", 00:17:12.490 "params": { 00:17:12.490 "enable_ktls": false, 00:17:12.490 "enable_placement_id": 0, 00:17:12.490 "enable_quickack": false, 00:17:12.490 "enable_recv_pipe": true, 00:17:12.490 "enable_zerocopy_send_client": false, 00:17:12.490 "enable_zerocopy_send_server": true, 
00:17:12.490 "impl_name": "ssl", 00:17:12.490 "recv_buf_size": 4096, 00:17:12.490 "send_buf_size": 4096, 00:17:12.490 "tls_version": 0, 00:17:12.490 "zerocopy_threshold": 0 00:17:12.490 } 00:17:12.490 } 00:17:12.490 ] 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "subsystem": "vmd", 00:17:12.490 "config": [] 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "subsystem": "accel", 00:17:12.490 "config": [ 00:17:12.490 { 00:17:12.490 "method": "accel_set_options", 00:17:12.490 "params": { 00:17:12.490 "buf_count": 2048, 00:17:12.490 "large_cache_size": 16, 00:17:12.490 "sequence_count": 2048, 00:17:12.490 "small_cache_size": 128, 00:17:12.490 "task_count": 2048 00:17:12.490 } 00:17:12.490 } 00:17:12.490 ] 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "subsystem": "bdev", 00:17:12.490 "config": [ 00:17:12.490 { 00:17:12.490 "method": "bdev_set_options", 00:17:12.490 "params": { 00:17:12.490 "bdev_auto_examine": true, 00:17:12.490 "bdev_io_cache_size": 256, 00:17:12.490 "bdev_io_pool_size": 65535, 00:17:12.490 "iobuf_large_cache_size": 16, 00:17:12.490 "iobuf_small_cache_size": 128 00:17:12.490 } 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "method": "bdev_raid_set_options", 00:17:12.490 "params": { 00:17:12.490 "process_window_size_kb": 1024 00:17:12.490 } 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "method": "bdev_iscsi_set_options", 00:17:12.490 "params": { 00:17:12.490 "timeout_sec": 30 00:17:12.490 } 00:17:12.490 }, 00:17:12.490 { 00:17:12.490 "method": "bdev_nvme_set_options", 00:17:12.490 "params": { 00:17:12.490 "action_on_timeout": "none", 00:17:12.490 "allow_accel_sequence": false, 00:17:12.490 "arbitration_burst": 0, 00:17:12.490 "bdev_retry_count": 3, 00:17:12.490 "ctrlr_loss_timeout_sec": 0, 00:17:12.490 "delay_cmd_submit": true, 00:17:12.490 "fast_io_fail_timeout_sec": 0, 00:17:12.490 "generate_uuids": false, 00:17:12.490 "high_priority_weight": 0, 00:17:12.490 "io_path_stat": false, 00:17:12.490 "io_queue_requests": 0, 00:17:12.490 "keep_alive_timeout_ms": 10000, 00:17:12.490 "low_priority_weight": 0, 00:17:12.490 "medium_priority_weight": 0, 00:17:12.490 "nvme_adminq_poll_period_us": 10000, 00:17:12.490 "nvme_ioq_poll_period_us": 0, 00:17:12.490 "reconnect_delay_sec": 0, 00:17:12.491 "timeout_admin_us": 0, 00:17:12.491 "timeout_us": 0, 00:17:12.491 "transport_ack_timeout": 0, 00:17:12.491 "transport_retry_count": 4, 00:17:12.491 "transport_tos": 0 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "bdev_nvme_set_hotplug", 00:17:12.491 "params": { 00:17:12.491 "enable": false, 00:17:12.491 "period_us": 100000 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "bdev_malloc_create", 00:17:12.491 "params": { 00:17:12.491 "block_size": 4096, 00:17:12.491 "name": "malloc0", 00:17:12.491 "num_blocks": 8192, 00:17:12.491 "optimal_io_boundary": 0, 00:17:12.491 "physical_block_size": 4096, 00:17:12.491 "uuid": "17d227f3-94b5-4f3e-b37b-24467fc28431" 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "bdev_wait_for_examine" 00:17:12.491 } 00:17:12.491 ] 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "subsystem": "nbd", 00:17:12.491 "config": [] 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "subsystem": "scheduler", 00:17:12.491 "config": [ 00:17:12.491 { 00:17:12.491 "method": "framework_set_scheduler", 00:17:12.491 "params": { 00:17:12.491 "name": "static" 00:17:12.491 } 00:17:12.491 } 00:17:12.491 ] 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "subsystem": "nvmf", 00:17:12.491 "config": [ 00:17:12.491 { 00:17:12.491 "method": "nvmf_set_config", 
00:17:12.491 "params": { 00:17:12.491 "admin_cmd_passthru": { 00:17:12.491 "identify_ctrlr": false 00:17:12.491 }, 00:17:12.491 "discovery_filter": "match_any" 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "nvmf_set_max_subsystems", 00:17:12.491 "params": { 00:17:12.491 "max_subsystems": 1024 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "nvmf_set_crdt", 00:17:12.491 "params": { 00:17:12.491 "crdt1": 0, 00:17:12.491 "crdt2": 0, 00:17:12.491 "crdt3": 0 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "nvmf_create_transport", 00:17:12.491 "params": { 00:17:12.491 "abort_timeout_sec": 1, 00:17:12.491 "buf_cache_size": 4294967295, 00:17:12.491 "c2h_success": false, 00:17:12.491 "dif_insert_or_strip": false, 00:17:12.491 "in_capsule_data_size": 4096, 00:17:12.491 "io_unit_size": 131072, 00:17:12.491 "max_aq_depth": 128, 00:17:12.491 "max_io_qpairs_per_ctrlr": 127, 00:17:12.491 "max_io_size": 131072, 00:17:12.491 "max_queue_depth": 128, 00:17:12.491 "num_shared_buffers": 511, 00:17:12.491 "sock_priority": 0, 00:17:12.491 "trtype": "TCP", 00:17:12.491 "zcopy": false 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "nvmf_create_subsystem", 00:17:12.491 "params": { 00:17:12.491 "allow_any_host": false, 00:17:12.491 "ana_reporting": false, 00:17:12.491 "max_cntlid": 65519, 00:17:12.491 "max_namespaces": 10, 00:17:12.491 "min_cntlid": 1, 00:17:12.491 "model_number": "SPDK bdev Controller", 00:17:12.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.491 "serial_number": "SPDK00000000000001" 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "nvmf_subsystem_add_host", 00:17:12.491 "params": { 00:17:12.491 "host": "nqn.2016-06.io.spdk:host1", 00:17:12.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.491 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "nvmf_subsystem_add_ns", 00:17:12.491 "params": { 00:17:12.491 "namespace": { 00:17:12.491 "bdev_name": "malloc0", 00:17:12.491 "nguid": "17D227F394B54F3EB37B24467FC28431", 00:17:12.491 "nsid": 1, 00:17:12.491 "uuid": "17d227f3-94b5-4f3e-b37b-24467fc28431" 00:17:12.491 }, 00:17:12.491 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:12.491 } 00:17:12.491 }, 00:17:12.491 { 00:17:12.491 "method": "nvmf_subsystem_add_listener", 00:17:12.491 "params": { 00:17:12.491 "listen_address": { 00:17:12.491 "adrfam": "IPv4", 00:17:12.491 "traddr": "10.0.0.2", 00:17:12.491 "trsvcid": "4420", 00:17:12.491 "trtype": "TCP" 00:17:12.491 }, 00:17:12.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.491 "secure_channel": true 00:17:12.491 } 00:17:12.491 } 00:17:12.491 ] 00:17:12.491 } 00:17:12.491 ] 00:17:12.491 }' 00:17:12.491 07:09:56 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:12.491 07:09:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:12.491 07:09:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:12.491 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:17:12.491 07:09:56 -- nvmf/common.sh@469 -- # nvmfpid=78286 00:17:12.491 07:09:56 -- nvmf/common.sh@470 -- # waitforlisten 78286 00:17:12.491 07:09:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:12.491 07:09:56 -- common/autotest_common.sh@819 -- # '[' -z 78286 ']' 00:17:12.491 07:09:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.491 07:09:56 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:17:12.491 07:09:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.491 07:09:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:12.491 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:17:12.491 [2024-07-11 07:09:56.432895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:12.491 [2024-07-11 07:09:56.432993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.750 [2024-07-11 07:09:56.570659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.750 [2024-07-11 07:09:56.653985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:12.750 [2024-07-11 07:09:56.654127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.750 [2024-07-11 07:09:56.654140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.750 [2024-07-11 07:09:56.654148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.750 [2024-07-11 07:09:56.654180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.008 [2024-07-11 07:09:56.868245] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.008 [2024-07-11 07:09:56.900210] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:13.008 [2024-07-11 07:09:56.900388] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.575 07:09:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.575 07:09:57 -- common/autotest_common.sh@852 -- # return 0 00:17:13.575 07:09:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:13.575 07:09:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:13.575 07:09:57 -- common/autotest_common.sh@10 -- # set +x 00:17:13.575 07:09:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.575 07:09:57 -- target/tls.sh@216 -- # bdevperf_pid=78330 00:17:13.575 07:09:57 -- target/tls.sh@217 -- # waitforlisten 78330 /var/tmp/bdevperf.sock 00:17:13.575 07:09:57 -- common/autotest_common.sh@819 -- # '[' -z 78330 ']' 00:17:13.575 07:09:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.575 07:09:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.575 07:09:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:13.575 07:09:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.575 07:09:57 -- common/autotest_common.sh@10 -- # set +x 00:17:13.575 07:09:57 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:13.575 07:09:57 -- target/tls.sh@213 -- # echo '{ 00:17:13.575 "subsystems": [ 00:17:13.575 { 00:17:13.575 "subsystem": "iobuf", 00:17:13.575 "config": [ 00:17:13.575 { 00:17:13.575 "method": "iobuf_set_options", 00:17:13.575 "params": { 00:17:13.575 "large_bufsize": 135168, 00:17:13.575 "large_pool_count": 1024, 00:17:13.575 "small_bufsize": 8192, 00:17:13.575 "small_pool_count": 8192 00:17:13.575 } 00:17:13.575 } 00:17:13.575 ] 00:17:13.575 }, 00:17:13.575 { 00:17:13.575 "subsystem": "sock", 00:17:13.575 "config": [ 00:17:13.575 { 00:17:13.575 "method": "sock_impl_set_options", 00:17:13.575 "params": { 00:17:13.575 "enable_ktls": false, 00:17:13.575 "enable_placement_id": 0, 00:17:13.575 "enable_quickack": false, 00:17:13.575 "enable_recv_pipe": true, 00:17:13.575 "enable_zerocopy_send_client": false, 00:17:13.575 "enable_zerocopy_send_server": true, 00:17:13.575 "impl_name": "posix", 00:17:13.575 "recv_buf_size": 2097152, 00:17:13.575 "send_buf_size": 2097152, 00:17:13.575 "tls_version": 0, 00:17:13.575 "zerocopy_threshold": 0 00:17:13.575 } 00:17:13.575 }, 00:17:13.575 { 00:17:13.575 "method": "sock_impl_set_options", 00:17:13.575 "params": { 00:17:13.575 "enable_ktls": false, 00:17:13.575 "enable_placement_id": 0, 00:17:13.575 "enable_quickack": false, 00:17:13.575 "enable_recv_pipe": true, 00:17:13.575 "enable_zerocopy_send_client": false, 00:17:13.575 "enable_zerocopy_send_server": true, 00:17:13.575 "impl_name": "ssl", 00:17:13.575 "recv_buf_size": 4096, 00:17:13.575 "send_buf_size": 4096, 00:17:13.575 "tls_version": 0, 00:17:13.575 "zerocopy_threshold": 0 00:17:13.575 } 00:17:13.575 } 00:17:13.575 ] 00:17:13.575 }, 00:17:13.575 { 00:17:13.575 "subsystem": "vmd", 00:17:13.575 "config": [] 00:17:13.575 }, 00:17:13.575 { 00:17:13.575 "subsystem": "accel", 00:17:13.575 "config": [ 00:17:13.575 { 00:17:13.575 "method": "accel_set_options", 00:17:13.575 "params": { 00:17:13.575 "buf_count": 2048, 00:17:13.575 "large_cache_size": 16, 00:17:13.575 "sequence_count": 2048, 00:17:13.575 "small_cache_size": 128, 00:17:13.575 "task_count": 2048 00:17:13.575 } 00:17:13.575 } 00:17:13.575 ] 00:17:13.575 }, 00:17:13.575 { 00:17:13.575 "subsystem": "bdev", 00:17:13.575 "config": [ 00:17:13.575 { 00:17:13.575 "method": "bdev_set_options", 00:17:13.575 "params": { 00:17:13.575 "bdev_auto_examine": true, 00:17:13.575 "bdev_io_cache_size": 256, 00:17:13.576 "bdev_io_pool_size": 65535, 00:17:13.576 "iobuf_large_cache_size": 16, 00:17:13.576 "iobuf_small_cache_size": 128 00:17:13.576 } 00:17:13.576 }, 00:17:13.576 { 00:17:13.576 "method": "bdev_raid_set_options", 00:17:13.576 "params": { 00:17:13.576 "process_window_size_kb": 1024 00:17:13.576 } 00:17:13.576 }, 00:17:13.576 { 00:17:13.576 "method": "bdev_iscsi_set_options", 00:17:13.576 "params": { 00:17:13.576 "timeout_sec": 30 00:17:13.576 } 00:17:13.576 }, 00:17:13.576 { 00:17:13.576 "method": "bdev_nvme_set_options", 00:17:13.576 "params": { 00:17:13.576 "action_on_timeout": "none", 00:17:13.576 "allow_accel_sequence": false, 00:17:13.576 "arbitration_burst": 0, 00:17:13.576 "bdev_retry_count": 3, 00:17:13.576 "ctrlr_loss_timeout_sec": 0, 00:17:13.576 "delay_cmd_submit": true, 00:17:13.576 "fast_io_fail_timeout_sec": 0, 
00:17:13.576 "generate_uuids": false, 00:17:13.576 "high_priority_weight": 0, 00:17:13.576 "io_path_stat": false, 00:17:13.576 "io_queue_requests": 512, 00:17:13.576 "keep_alive_timeout_ms": 10000, 00:17:13.576 "low_priority_weight": 0, 00:17:13.576 "medium_priority_weight": 0, 00:17:13.576 "nvme_adminq_poll_period_us": 10000, 00:17:13.576 "nvme_ioq_poll_period_us": 0, 00:17:13.576 "reconnect_delay_sec": 0, 00:17:13.576 "timeout_admin_us": 0, 00:17:13.576 "timeout_us": 0, 00:17:13.576 "transport_ack_timeout": 0, 00:17:13.576 "transport_retry_count": 4, 00:17:13.576 "transport_tos": 0 00:17:13.576 } 00:17:13.576 }, 00:17:13.576 { 00:17:13.576 "method": "bdev_nvme_attach_controller", 00:17:13.576 "params": { 00:17:13.576 "adrfam": "IPv4", 00:17:13.576 "ctrlr_loss_timeout_sec": 0, 00:17:13.576 "ddgst": false, 00:17:13.576 "fast_io_fail_timeout_sec": 0, 00:17:13.576 "hdgst": false, 00:17:13.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.576 "name": "TLSTEST", 00:17:13.576 "prchk_guard": false, 00:17:13.576 "prchk_reftag": false, 00:17:13.576 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:13.576 "reconnect_delay_sec": 0, 00:17:13.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.576 "traddr": "10.0.0.2", 00:17:13.576 "trsvcid": "4420", 00:17:13.576 "trtype": "TCP" 00:17:13.576 } 00:17:13.576 }, 00:17:13.576 { 00:17:13.576 "method": "bdev_nvme_set_hotplug", 00:17:13.576 "params": { 00:17:13.576 "enable": false, 00:17:13.576 "period_us": 100000 00:17:13.576 } 00:17:13.576 }, 00:17:13.576 { 00:17:13.576 "method": "bdev_wait_for_examine" 00:17:13.576 } 00:17:13.576 ] 00:17:13.576 }, 00:17:13.576 { 00:17:13.576 "subsystem": "nbd", 00:17:13.576 "config": [] 00:17:13.576 } 00:17:13.576 ] 00:17:13.576 }' 00:17:13.576 [2024-07-11 07:09:57.413679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:13.576 [2024-07-11 07:09:57.413757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78330 ] 00:17:13.576 [2024-07-11 07:09:57.550336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.834 [2024-07-11 07:09:57.659001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.834 [2024-07-11 07:09:57.836639] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.400 07:09:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:14.400 07:09:58 -- common/autotest_common.sh@852 -- # return 0 00:17:14.400 07:09:58 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:14.400 Running I/O for 10 seconds... 
00:17:26.597 00:17:26.597 Latency(us) 00:17:26.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.598 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:26.598 Verification LBA range: start 0x0 length 0x2000 00:17:26.598 TLSTESTn1 : 10.02 6291.40 24.58 0.00 0.00 20313.14 6136.55 33840.41 00:17:26.598 =================================================================================================================== 00:17:26.598 Total : 6291.40 24.58 0.00 0.00 20313.14 6136.55 33840.41 00:17:26.598 0 00:17:26.598 07:10:08 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.598 07:10:08 -- target/tls.sh@223 -- # killprocess 78330 00:17:26.598 07:10:08 -- common/autotest_common.sh@926 -- # '[' -z 78330 ']' 00:17:26.598 07:10:08 -- common/autotest_common.sh@930 -- # kill -0 78330 00:17:26.598 07:10:08 -- common/autotest_common.sh@931 -- # uname 00:17:26.598 07:10:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.598 07:10:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78330 00:17:26.598 07:10:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:26.598 07:10:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:26.598 killing process with pid 78330 00:17:26.598 07:10:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78330' 00:17:26.598 07:10:08 -- common/autotest_common.sh@945 -- # kill 78330 00:17:26.598 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.598 00:17:26.598 Latency(us) 00:17:26.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.598 =================================================================================================================== 00:17:26.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.598 07:10:08 -- common/autotest_common.sh@950 -- # wait 78330 00:17:26.598 07:10:08 -- target/tls.sh@224 -- # killprocess 78286 00:17:26.598 07:10:08 -- common/autotest_common.sh@926 -- # '[' -z 78286 ']' 00:17:26.598 07:10:08 -- common/autotest_common.sh@930 -- # kill -0 78286 00:17:26.598 07:10:08 -- common/autotest_common.sh@931 -- # uname 00:17:26.598 07:10:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.598 07:10:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78286 00:17:26.598 07:10:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:26.598 07:10:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:26.598 07:10:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78286' 00:17:26.598 killing process with pid 78286 00:17:26.598 07:10:08 -- common/autotest_common.sh@945 -- # kill 78286 00:17:26.598 07:10:08 -- common/autotest_common.sh@950 -- # wait 78286 00:17:26.598 07:10:09 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:26.598 07:10:09 -- target/tls.sh@227 -- # cleanup 00:17:26.598 07:10:09 -- target/tls.sh@15 -- # process_shm --id 0 00:17:26.598 07:10:09 -- common/autotest_common.sh@796 -- # type=--id 00:17:26.598 07:10:09 -- common/autotest_common.sh@797 -- # id=0 00:17:26.598 07:10:09 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:26.598 07:10:09 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:26.598 07:10:09 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:26.598 07:10:09 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 
00:17:26.598 07:10:09 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:26.598 07:10:09 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:26.598 nvmf_trace.0 00:17:26.598 07:10:09 -- common/autotest_common.sh@811 -- # return 0 00:17:26.598 07:10:09 -- target/tls.sh@16 -- # killprocess 78330 00:17:26.598 07:10:09 -- common/autotest_common.sh@926 -- # '[' -z 78330 ']' 00:17:26.598 07:10:09 -- common/autotest_common.sh@930 -- # kill -0 78330 00:17:26.598 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (78330) - No such process 00:17:26.598 Process with pid 78330 is not found 00:17:26.598 07:10:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 78330 is not found' 00:17:26.598 07:10:09 -- target/tls.sh@17 -- # nvmftestfini 00:17:26.598 07:10:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:26.598 07:10:09 -- nvmf/common.sh@116 -- # sync 00:17:26.598 07:10:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:26.598 07:10:09 -- nvmf/common.sh@119 -- # set +e 00:17:26.598 07:10:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:26.598 07:10:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:26.598 rmmod nvme_tcp 00:17:26.598 rmmod nvme_fabrics 00:17:26.598 rmmod nvme_keyring 00:17:26.598 07:10:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:26.598 07:10:09 -- nvmf/common.sh@123 -- # set -e 00:17:26.598 07:10:09 -- nvmf/common.sh@124 -- # return 0 00:17:26.598 07:10:09 -- nvmf/common.sh@477 -- # '[' -n 78286 ']' 00:17:26.598 07:10:09 -- nvmf/common.sh@478 -- # killprocess 78286 00:17:26.598 07:10:09 -- common/autotest_common.sh@926 -- # '[' -z 78286 ']' 00:17:26.598 07:10:09 -- common/autotest_common.sh@930 -- # kill -0 78286 00:17:26.598 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (78286) - No such process 00:17:26.598 Process with pid 78286 is not found 00:17:26.598 07:10:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 78286 is not found' 00:17:26.598 07:10:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:26.598 07:10:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:26.598 07:10:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:26.598 07:10:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.598 07:10:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:26.598 07:10:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.598 07:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.598 07:10:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.598 07:10:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:26.598 07:10:09 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.598 00:17:26.598 real 1m10.964s 00:17:26.598 user 1m44.809s 00:17:26.598 sys 0m27.386s 00:17:26.598 07:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.598 07:10:09 -- common/autotest_common.sh@10 -- # set +x 00:17:26.598 ************************************ 00:17:26.598 END TEST nvmf_tls 00:17:26.598 ************************************ 00:17:26.598 07:10:09 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:26.598 07:10:09 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:26.598 07:10:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:26.598 07:10:09 -- common/autotest_common.sh@10 -- # set +x 00:17:26.598 ************************************ 00:17:26.598 START TEST nvmf_fips 00:17:26.598 ************************************ 00:17:26.598 07:10:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:26.598 * Looking for test storage... 00:17:26.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:26.598 07:10:09 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.598 07:10:09 -- nvmf/common.sh@7 -- # uname -s 00:17:26.598 07:10:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.598 07:10:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.598 07:10:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.598 07:10:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.598 07:10:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.598 07:10:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.598 07:10:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.598 07:10:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.598 07:10:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.598 07:10:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.598 07:10:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:17:26.598 07:10:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:17:26.598 07:10:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.598 07:10:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.598 07:10:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.598 07:10:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.598 07:10:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.598 07:10:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.598 07:10:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.598 07:10:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.598 07:10:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.598 07:10:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.598 07:10:09 -- paths/export.sh@5 -- # export PATH 00:17:26.598 07:10:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.598 07:10:09 -- nvmf/common.sh@46 -- # : 0 00:17:26.598 07:10:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:26.598 07:10:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:26.598 07:10:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:26.598 07:10:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.598 07:10:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.598 07:10:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:26.598 07:10:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:26.598 07:10:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:26.598 07:10:09 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.598 07:10:09 -- fips/fips.sh@89 -- # check_openssl_version 00:17:26.598 07:10:09 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:26.598 07:10:09 -- fips/fips.sh@85 -- # openssl version 00:17:26.599 07:10:09 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:26.599 07:10:09 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:26.599 07:10:09 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:26.599 07:10:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:26.599 07:10:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:26.599 07:10:09 -- scripts/common.sh@335 -- # IFS=.-: 00:17:26.599 07:10:09 -- scripts/common.sh@335 -- # read -ra ver1 00:17:26.599 07:10:09 -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.599 07:10:09 -- scripts/common.sh@336 -- # read -ra ver2 00:17:26.599 07:10:09 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:26.599 07:10:09 -- scripts/common.sh@339 -- # ver1_l=3 00:17:26.599 07:10:09 -- scripts/common.sh@340 -- # ver2_l=3 00:17:26.599 07:10:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:26.599 07:10:09 -- scripts/common.sh@343 -- # case "$op" in 00:17:26.599 07:10:09 -- scripts/common.sh@347 -- # : 1 00:17:26.599 07:10:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:26.599 07:10:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.599 07:10:09 -- scripts/common.sh@364 -- # decimal 3 00:17:26.599 07:10:09 -- scripts/common.sh@352 -- # local d=3 00:17:26.599 07:10:09 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:26.599 07:10:09 -- scripts/common.sh@354 -- # echo 3 00:17:26.599 07:10:09 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:26.599 07:10:09 -- scripts/common.sh@365 -- # decimal 3 00:17:26.599 07:10:09 -- scripts/common.sh@352 -- # local d=3 00:17:26.599 07:10:09 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:26.599 07:10:09 -- scripts/common.sh@354 -- # echo 3 00:17:26.599 07:10:09 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:26.599 07:10:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.599 07:10:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:26.599 07:10:09 -- scripts/common.sh@363 -- # (( v++ )) 00:17:26.599 07:10:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.599 07:10:09 -- scripts/common.sh@364 -- # decimal 0 00:17:26.599 07:10:09 -- scripts/common.sh@352 -- # local d=0 00:17:26.599 07:10:09 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:26.599 07:10:09 -- scripts/common.sh@354 -- # echo 0 00:17:26.599 07:10:09 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:26.599 07:10:09 -- scripts/common.sh@365 -- # decimal 0 00:17:26.599 07:10:09 -- scripts/common.sh@352 -- # local d=0 00:17:26.599 07:10:09 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:26.599 07:10:09 -- scripts/common.sh@354 -- # echo 0 00:17:26.599 07:10:09 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:26.599 07:10:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.599 07:10:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:26.599 07:10:09 -- scripts/common.sh@363 -- # (( v++ )) 00:17:26.599 07:10:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.599 07:10:09 -- scripts/common.sh@364 -- # decimal 9 00:17:26.599 07:10:09 -- scripts/common.sh@352 -- # local d=9 00:17:26.599 07:10:09 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:26.599 07:10:09 -- scripts/common.sh@354 -- # echo 9 00:17:26.599 07:10:09 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:26.599 07:10:09 -- scripts/common.sh@365 -- # decimal 0 00:17:26.599 07:10:09 -- scripts/common.sh@352 -- # local d=0 00:17:26.599 07:10:09 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:26.599 07:10:09 -- scripts/common.sh@354 -- # echo 0 00:17:26.599 07:10:09 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:26.599 07:10:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.599 07:10:09 -- scripts/common.sh@366 -- # return 0 00:17:26.599 07:10:09 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:26.599 07:10:09 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:26.599 07:10:09 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:26.599 07:10:09 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:26.599 07:10:09 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:26.599 07:10:09 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:26.599 07:10:09 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:26.599 07:10:09 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:26.599 07:10:09 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:26.599 07:10:09 -- fips/fips.sh@114 -- # build_openssl_config 00:17:26.599 07:10:09 -- fips/fips.sh@37 -- # cat 00:17:26.599 07:10:09 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:26.599 07:10:09 -- fips/fips.sh@58 -- # cat - 00:17:26.599 07:10:09 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:26.599 07:10:09 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:26.599 07:10:09 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:26.599 07:10:09 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:26.599 07:10:09 -- fips/fips.sh@117 -- # grep name 00:17:26.599 07:10:09 -- fips/fips.sh@117 -- # openssl list -providers 00:17:26.599 07:10:09 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:26.599 07:10:09 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:26.599 07:10:09 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:26.599 07:10:09 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:26.599 07:10:09 -- fips/fips.sh@128 -- # : 00:17:26.599 07:10:09 -- common/autotest_common.sh@640 -- # local es=0 00:17:26.599 07:10:09 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:26.599 07:10:09 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:26.599 07:10:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.599 07:10:09 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:26.599 07:10:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.599 07:10:09 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:26.599 07:10:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.599 07:10:09 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:26.599 07:10:09 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:26.599 07:10:09 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:26.599 Error setting digest 00:17:26.599 007210D5CE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:26.599 007210D5CE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:26.599 07:10:09 -- common/autotest_common.sh@643 -- # es=1 00:17:26.599 07:10:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:26.599 07:10:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:26.599 07:10:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
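Editor's note: the "Error setting digest" output above is the expected result of the FIPS probe in fips.sh, not a test failure. With OPENSSL_CONF pointed at the generated spdk_fips.conf, the provider list must include a fips provider and a non-approved digest such as MD5 must be refused. The snippet below is a minimal sketch of that check using only the openssl invocations visible in the log; the spdk_fips.conf contents are assumed to come from build_openssl_config in fips.sh.

    # Sketch (assumption: spdk_fips.conf already generated by build_openssl_config).
    export OPENSSL_CONF=spdk_fips.conf

    # The base and fips providers must both be active.
    openssl list -providers | grep name | grep -qi fips \
        || { echo "FIPS provider not active" >&2; exit 1; }

    # A non-approved digest must be rejected; MD5 failing with
    # "digital envelope routines ... unsupported" is the expected outcome.
    if echo -n test | openssl md5 2>/dev/null; then
        echo "MD5 unexpectedly succeeded: not running in FIPS mode" >&2
        exit 1
    fi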
00:17:26.599 07:10:09 -- fips/fips.sh@131 -- # nvmftestinit 00:17:26.599 07:10:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:26.599 07:10:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.599 07:10:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:26.599 07:10:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:26.599 07:10:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:26.599 07:10:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.599 07:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.599 07:10:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.599 07:10:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:26.599 07:10:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:26.599 07:10:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:26.599 07:10:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:26.599 07:10:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:26.599 07:10:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:26.599 07:10:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.599 07:10:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.599 07:10:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:26.599 07:10:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:26.599 07:10:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:26.599 07:10:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:26.599 07:10:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:26.599 07:10:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.599 07:10:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:26.599 07:10:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:26.599 07:10:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:26.599 07:10:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:26.599 07:10:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:26.599 07:10:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:26.599 Cannot find device "nvmf_tgt_br" 00:17:26.599 07:10:09 -- nvmf/common.sh@154 -- # true 00:17:26.599 07:10:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.599 Cannot find device "nvmf_tgt_br2" 00:17:26.599 07:10:09 -- nvmf/common.sh@155 -- # true 00:17:26.599 07:10:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:26.599 07:10:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:26.599 Cannot find device "nvmf_tgt_br" 00:17:26.599 07:10:09 -- nvmf/common.sh@157 -- # true 00:17:26.599 07:10:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:26.599 Cannot find device "nvmf_tgt_br2" 00:17:26.599 07:10:09 -- nvmf/common.sh@158 -- # true 00:17:26.599 07:10:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:26.599 07:10:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:26.599 07:10:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.599 07:10:09 -- nvmf/common.sh@161 -- # true 00:17:26.599 07:10:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:26.599 07:10:09 -- nvmf/common.sh@162 -- # true 00:17:26.599 07:10:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:26.599 07:10:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:26.599 07:10:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:26.599 07:10:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:26.599 07:10:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:26.599 07:10:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:26.599 07:10:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:26.599 07:10:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:26.599 07:10:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:26.599 07:10:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:26.599 07:10:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:26.599 07:10:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:26.599 07:10:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:26.599 07:10:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:26.599 07:10:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:26.599 07:10:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:26.600 07:10:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:26.600 07:10:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:26.600 07:10:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:26.600 07:10:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:26.600 07:10:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:26.600 07:10:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:26.600 07:10:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:26.600 07:10:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:26.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:17:26.600 00:17:26.600 --- 10.0.0.2 ping statistics --- 00:17:26.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.600 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:26.600 07:10:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:26.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:26.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:17:26.600 00:17:26.600 --- 10.0.0.3 ping statistics --- 00:17:26.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.600 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:26.600 07:10:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:26.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:26.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:26.600 00:17:26.600 --- 10.0.0.1 ping statistics --- 00:17:26.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.600 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:26.600 07:10:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.600 07:10:09 -- nvmf/common.sh@421 -- # return 0 00:17:26.600 07:10:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:26.600 07:10:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.600 07:10:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:26.600 07:10:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:26.600 07:10:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.600 07:10:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:26.600 07:10:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:26.600 07:10:09 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:26.600 07:10:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:26.600 07:10:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:26.600 07:10:09 -- common/autotest_common.sh@10 -- # set +x 00:17:26.600 07:10:09 -- nvmf/common.sh@469 -- # nvmfpid=78691 00:17:26.600 07:10:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.600 07:10:09 -- nvmf/common.sh@470 -- # waitforlisten 78691 00:17:26.600 07:10:09 -- common/autotest_common.sh@819 -- # '[' -z 78691 ']' 00:17:26.600 07:10:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.600 07:10:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:26.600 07:10:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.600 07:10:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:26.600 07:10:09 -- common/autotest_common.sh@10 -- # set +x 00:17:26.600 [2024-07-11 07:10:10.073964] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:26.600 [2024-07-11 07:10:10.074052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.600 [2024-07-11 07:10:10.211754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.600 [2024-07-11 07:10:10.282149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:26.600 [2024-07-11 07:10:10.282286] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.600 [2024-07-11 07:10:10.282314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.600 [2024-07-11 07:10:10.282322] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:26.600 [2024-07-11 07:10:10.282346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.165 07:10:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:27.165 07:10:10 -- common/autotest_common.sh@852 -- # return 0 00:17:27.165 07:10:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:27.165 07:10:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:27.165 07:10:10 -- common/autotest_common.sh@10 -- # set +x 00:17:27.165 07:10:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.165 07:10:10 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:27.165 07:10:10 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:27.165 07:10:10 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:27.165 07:10:10 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:27.165 07:10:10 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:27.165 07:10:10 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:27.165 07:10:10 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:27.165 07:10:11 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.423 [2024-07-11 07:10:11.244490] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.423 [2024-07-11 07:10:11.260436] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:27.423 [2024-07-11 07:10:11.260642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.423 malloc0 00:17:27.423 07:10:11 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.423 07:10:11 -- fips/fips.sh@148 -- # bdevperf_pid=78743 00:17:27.423 07:10:11 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.423 07:10:11 -- fips/fips.sh@149 -- # waitforlisten 78743 /var/tmp/bdevperf.sock 00:17:27.423 07:10:11 -- common/autotest_common.sh@819 -- # '[' -z 78743 ']' 00:17:27.423 07:10:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.423 07:10:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.423 07:10:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.423 07:10:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.423 07:10:11 -- common/autotest_common.sh@10 -- # set +x 00:17:27.423 [2024-07-11 07:10:11.401068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:27.423 [2024-07-11 07:10:11.401165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78743 ] 00:17:27.681 [2024-07-11 07:10:11.539019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.681 [2024-07-11 07:10:11.628095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.256 07:10:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.256 07:10:12 -- common/autotest_common.sh@852 -- # return 0 00:17:28.256 07:10:12 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:28.529 [2024-07-11 07:10:12.477709] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.529 TLSTESTn1 00:17:28.529 07:10:12 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:28.786 Running I/O for 10 seconds... 00:17:38.748 00:17:38.748 Latency(us) 00:17:38.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.748 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:38.748 Verification LBA range: start 0x0 length 0x2000 00:17:38.748 TLSTESTn1 : 10.02 5900.51 23.05 0.00 0.00 21657.14 5630.14 22639.71 00:17:38.748 =================================================================================================================== 00:17:38.749 Total : 5900.51 23.05 0.00 0.00 21657.14 5630.14 22639.71 00:17:38.749 0 00:17:38.749 07:10:22 -- fips/fips.sh@1 -- # cleanup 00:17:38.749 07:10:22 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:38.749 07:10:22 -- common/autotest_common.sh@796 -- # type=--id 00:17:38.749 07:10:22 -- common/autotest_common.sh@797 -- # id=0 00:17:38.749 07:10:22 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:38.749 07:10:22 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:38.749 07:10:22 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:38.749 07:10:22 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:38.749 07:10:22 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:38.749 07:10:22 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:38.749 nvmf_trace.0 00:17:38.749 07:10:22 -- common/autotest_common.sh@811 -- # return 0 00:17:38.749 07:10:22 -- fips/fips.sh@16 -- # killprocess 78743 00:17:38.749 07:10:22 -- common/autotest_common.sh@926 -- # '[' -z 78743 ']' 00:17:38.749 07:10:22 -- common/autotest_common.sh@930 -- # kill -0 78743 00:17:38.749 07:10:22 -- common/autotest_common.sh@931 -- # uname 00:17:38.749 07:10:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.749 07:10:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78743 00:17:38.749 07:10:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:38.749 07:10:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:38.749 killing process with pid 78743 00:17:38.749 07:10:22 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 78743' 00:17:38.749 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.749 00:17:38.749 Latency(us) 00:17:38.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.749 =================================================================================================================== 00:17:38.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.749 07:10:22 -- common/autotest_common.sh@945 -- # kill 78743 00:17:38.749 07:10:22 -- common/autotest_common.sh@950 -- # wait 78743 00:17:39.315 07:10:23 -- fips/fips.sh@17 -- # nvmftestfini 00:17:39.315 07:10:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:39.315 07:10:23 -- nvmf/common.sh@116 -- # sync 00:17:39.315 07:10:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:39.315 07:10:23 -- nvmf/common.sh@119 -- # set +e 00:17:39.315 07:10:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:39.315 07:10:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:39.315 rmmod nvme_tcp 00:17:39.315 rmmod nvme_fabrics 00:17:39.315 rmmod nvme_keyring 00:17:39.315 07:10:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:39.315 07:10:23 -- nvmf/common.sh@123 -- # set -e 00:17:39.315 07:10:23 -- nvmf/common.sh@124 -- # return 0 00:17:39.315 07:10:23 -- nvmf/common.sh@477 -- # '[' -n 78691 ']' 00:17:39.315 07:10:23 -- nvmf/common.sh@478 -- # killprocess 78691 00:17:39.315 07:10:23 -- common/autotest_common.sh@926 -- # '[' -z 78691 ']' 00:17:39.315 07:10:23 -- common/autotest_common.sh@930 -- # kill -0 78691 00:17:39.315 07:10:23 -- common/autotest_common.sh@931 -- # uname 00:17:39.315 07:10:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.315 07:10:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78691 00:17:39.315 07:10:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:39.315 killing process with pid 78691 00:17:39.315 07:10:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:39.315 07:10:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78691' 00:17:39.315 07:10:23 -- common/autotest_common.sh@945 -- # kill 78691 00:17:39.315 07:10:23 -- common/autotest_common.sh@950 -- # wait 78691 00:17:39.574 07:10:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:39.574 07:10:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:39.574 07:10:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:39.574 07:10:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.574 07:10:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:39.574 07:10:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.574 07:10:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.574 07:10:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.574 07:10:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:39.574 07:10:23 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:39.574 00:17:39.574 real 0m14.178s 00:17:39.574 user 0m17.750s 00:17:39.574 sys 0m6.645s 00:17:39.575 07:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.575 ************************************ 00:17:39.575 END TEST nvmf_fips 00:17:39.575 07:10:23 -- common/autotest_common.sh@10 -- # set +x 00:17:39.575 ************************************ 00:17:39.575 07:10:23 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:39.575 07:10:23 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:39.575 07:10:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:39.575 07:10:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:39.575 07:10:23 -- common/autotest_common.sh@10 -- # set +x 00:17:39.575 ************************************ 00:17:39.575 START TEST nvmf_fuzz 00:17:39.575 ************************************ 00:17:39.575 07:10:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:39.575 * Looking for test storage... 00:17:39.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:39.575 07:10:23 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.575 07:10:23 -- nvmf/common.sh@7 -- # uname -s 00:17:39.575 07:10:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.575 07:10:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.575 07:10:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.575 07:10:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.575 07:10:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.575 07:10:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.575 07:10:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.575 07:10:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.575 07:10:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.575 07:10:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.575 07:10:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:17:39.575 07:10:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:17:39.575 07:10:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.575 07:10:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.575 07:10:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.575 07:10:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.575 07:10:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.575 07:10:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.575 07:10:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.575 07:10:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.575 07:10:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.575 
07:10:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.575 07:10:23 -- paths/export.sh@5 -- # export PATH 00:17:39.575 07:10:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.575 07:10:23 -- nvmf/common.sh@46 -- # : 0 00:17:39.575 07:10:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:39.575 07:10:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:39.575 07:10:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:39.575 07:10:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.833 07:10:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.833 07:10:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:39.833 07:10:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:39.833 07:10:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:39.833 07:10:23 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:39.833 07:10:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:39.833 07:10:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.833 07:10:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:39.833 07:10:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:39.833 07:10:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:39.833 07:10:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.833 07:10:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.833 07:10:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.833 07:10:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:39.833 07:10:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:39.833 07:10:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:39.833 07:10:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:39.833 07:10:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:39.833 07:10:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:39.833 07:10:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.833 07:10:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.833 07:10:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:39.833 07:10:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:39.833 07:10:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.833 07:10:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.833 07:10:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.833 07:10:23 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.833 07:10:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.833 07:10:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.833 07:10:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.833 07:10:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.833 07:10:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:39.833 07:10:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:39.833 Cannot find device "nvmf_tgt_br" 00:17:39.833 07:10:23 -- nvmf/common.sh@154 -- # true 00:17:39.833 07:10:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.833 Cannot find device "nvmf_tgt_br2" 00:17:39.833 07:10:23 -- nvmf/common.sh@155 -- # true 00:17:39.833 07:10:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:39.833 07:10:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:39.833 Cannot find device "nvmf_tgt_br" 00:17:39.833 07:10:23 -- nvmf/common.sh@157 -- # true 00:17:39.833 07:10:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:39.833 Cannot find device "nvmf_tgt_br2" 00:17:39.833 07:10:23 -- nvmf/common.sh@158 -- # true 00:17:39.833 07:10:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:39.833 07:10:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:39.833 07:10:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.833 07:10:23 -- nvmf/common.sh@161 -- # true 00:17:39.833 07:10:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.833 07:10:23 -- nvmf/common.sh@162 -- # true 00:17:39.833 07:10:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.833 07:10:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.833 07:10:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.833 07:10:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.833 07:10:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.833 07:10:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.833 07:10:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.833 07:10:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:39.833 07:10:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:39.833 07:10:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:39.833 07:10:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:39.833 07:10:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:39.833 07:10:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:39.833 07:10:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.833 07:10:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.833 07:10:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.092 07:10:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:17:40.092 07:10:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:40.092 07:10:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.092 07:10:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.092 07:10:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.092 07:10:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.092 07:10:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.092 07:10:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:40.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:40.092 00:17:40.092 --- 10.0.0.2 ping statistics --- 00:17:40.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.092 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:40.092 07:10:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:40.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:40.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:40.092 00:17:40.092 --- 10.0.0.3 ping statistics --- 00:17:40.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.092 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:40.092 07:10:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:40.092 00:17:40.092 --- 10.0.0.1 ping statistics --- 00:17:40.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.092 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:40.092 07:10:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.092 07:10:23 -- nvmf/common.sh@421 -- # return 0 00:17:40.092 07:10:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:40.092 07:10:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.092 07:10:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:40.092 07:10:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:40.092 07:10:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.092 07:10:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:40.092 07:10:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:40.092 07:10:23 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=79085 00:17:40.092 07:10:23 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:40.092 07:10:23 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:40.092 07:10:23 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 79085 00:17:40.092 07:10:23 -- common/autotest_common.sh@819 -- # '[' -z 79085 ']' 00:17:40.092 07:10:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.092 07:10:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.092 07:10:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
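For readability, the nvmf_veth_init sequence traced above boils down to the topology below. The commands are copied from the trace; the cleanup of any previous run and the "ip link set ... up" steps are omitted here. One initiator veth stays on the host at 10.0.0.1, two target veths move into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and all of their peer ends are enslaved to a single host bridge with TCP/4420 allowed in.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target ends move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # peer ends share one bridge
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above are the sanity check that both target addresses and the initiator address are reachable, after which the fuzz target (pid 79085 being waited on here) runs inside the namespace via ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1.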
00:17:40.092 07:10:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.092 07:10:23 -- common/autotest_common.sh@10 -- # set +x 00:17:41.028 07:10:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.028 07:10:25 -- common/autotest_common.sh@852 -- # return 0 00:17:41.028 07:10:25 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.028 07:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.028 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:41.028 07:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.028 07:10:25 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:41.028 07:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.028 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:41.028 Malloc0 00:17:41.028 07:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.028 07:10:25 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:41.028 07:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.028 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:41.287 07:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.287 07:10:25 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.287 07:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.287 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:41.287 07:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.287 07:10:25 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.287 07:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.287 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:41.287 07:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.287 07:10:25 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:41.287 07:10:25 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:41.545 Shutting down the fuzz application 00:17:41.545 07:10:25 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:42.112 Shutting down the fuzz application 00:17:42.112 07:10:25 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.112 07:10:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.112 07:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:42.112 07:10:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.112 07:10:25 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:42.112 07:10:25 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:42.112 07:10:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:42.112 07:10:25 -- nvmf/common.sh@116 -- # sync 00:17:42.112 07:10:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:42.112 07:10:25 -- nvmf/common.sh@119 -- # set +e 00:17:42.112 07:10:25 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:42.112 07:10:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:42.112 rmmod nvme_tcp 00:17:42.112 rmmod nvme_fabrics 00:17:42.112 rmmod nvme_keyring 00:17:42.112 07:10:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:42.112 07:10:26 -- nvmf/common.sh@123 -- # set -e 00:17:42.112 07:10:26 -- nvmf/common.sh@124 -- # return 0 00:17:42.112 07:10:26 -- nvmf/common.sh@477 -- # '[' -n 79085 ']' 00:17:42.112 07:10:26 -- nvmf/common.sh@478 -- # killprocess 79085 00:17:42.112 07:10:26 -- common/autotest_common.sh@926 -- # '[' -z 79085 ']' 00:17:42.112 07:10:26 -- common/autotest_common.sh@930 -- # kill -0 79085 00:17:42.112 07:10:26 -- common/autotest_common.sh@931 -- # uname 00:17:42.112 07:10:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.112 07:10:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79085 00:17:42.112 killing process with pid 79085 00:17:42.112 07:10:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:42.112 07:10:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:42.112 07:10:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79085' 00:17:42.112 07:10:26 -- common/autotest_common.sh@945 -- # kill 79085 00:17:42.112 07:10:26 -- common/autotest_common.sh@950 -- # wait 79085 00:17:42.370 07:10:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:42.370 07:10:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:42.370 07:10:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:42.370 07:10:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.370 07:10:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:42.370 07:10:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.370 07:10:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.370 07:10:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.370 07:10:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:42.370 07:10:26 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:42.370 00:17:42.370 real 0m2.777s 00:17:42.370 user 0m2.998s 00:17:42.370 sys 0m0.669s 00:17:42.370 07:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.370 07:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:42.370 ************************************ 00:17:42.370 END TEST nvmf_fuzz 00:17:42.370 ************************************ 00:17:42.370 07:10:26 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:42.370 07:10:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:42.370 07:10:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:42.370 07:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:42.370 ************************************ 00:17:42.370 START TEST nvmf_multiconnection 00:17:42.370 ************************************ 00:17:42.370 07:10:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:42.628 * Looking for test storage... 
00:17:42.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:42.628 07:10:26 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.628 07:10:26 -- nvmf/common.sh@7 -- # uname -s 00:17:42.628 07:10:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.628 07:10:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.628 07:10:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.628 07:10:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.628 07:10:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.628 07:10:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.628 07:10:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.628 07:10:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.628 07:10:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.628 07:10:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.628 07:10:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:17:42.628 07:10:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:17:42.628 07:10:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.628 07:10:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.628 07:10:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.628 07:10:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.628 07:10:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.628 07:10:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.628 07:10:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.628 07:10:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.628 07:10:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.628 07:10:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.628 07:10:26 -- 
paths/export.sh@5 -- # export PATH 00:17:42.628 07:10:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.628 07:10:26 -- nvmf/common.sh@46 -- # : 0 00:17:42.628 07:10:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:42.628 07:10:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:42.628 07:10:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:42.628 07:10:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.628 07:10:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.628 07:10:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:42.628 07:10:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:42.628 07:10:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:42.628 07:10:26 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.628 07:10:26 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.628 07:10:26 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:42.628 07:10:26 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:42.628 07:10:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:42.628 07:10:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.628 07:10:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:42.628 07:10:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:42.628 07:10:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:42.628 07:10:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.628 07:10:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.628 07:10:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.628 07:10:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:42.628 07:10:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:42.628 07:10:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:42.628 07:10:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:42.628 07:10:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:42.628 07:10:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:42.628 07:10:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.628 07:10:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.628 07:10:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.628 07:10:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:42.629 07:10:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.629 07:10:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.629 07:10:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.629 07:10:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.629 07:10:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.629 07:10:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.629 07:10:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.629 07:10:26 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.629 07:10:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:42.629 07:10:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:42.629 Cannot find device "nvmf_tgt_br" 00:17:42.629 07:10:26 -- nvmf/common.sh@154 -- # true 00:17:42.629 07:10:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.629 Cannot find device "nvmf_tgt_br2" 00:17:42.629 07:10:26 -- nvmf/common.sh@155 -- # true 00:17:42.629 07:10:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:42.629 07:10:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:42.629 Cannot find device "nvmf_tgt_br" 00:17:42.629 07:10:26 -- nvmf/common.sh@157 -- # true 00:17:42.629 07:10:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:42.629 Cannot find device "nvmf_tgt_br2" 00:17:42.629 07:10:26 -- nvmf/common.sh@158 -- # true 00:17:42.629 07:10:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:42.629 07:10:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:42.629 07:10:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.629 07:10:26 -- nvmf/common.sh@161 -- # true 00:17:42.629 07:10:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.629 07:10:26 -- nvmf/common.sh@162 -- # true 00:17:42.629 07:10:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.629 07:10:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.629 07:10:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.629 07:10:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.629 07:10:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.887 07:10:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.887 07:10:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.888 07:10:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:42.888 07:10:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:42.888 07:10:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:42.888 07:10:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:42.888 07:10:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:42.888 07:10:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:42.888 07:10:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.888 07:10:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.888 07:10:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.888 07:10:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:42.888 07:10:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:42.888 07:10:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.888 07:10:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.888 07:10:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.888 
07:10:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.888 07:10:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.888 07:10:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:42.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:42.888 00:17:42.888 --- 10.0.0.2 ping statistics --- 00:17:42.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.888 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:42.888 07:10:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:42.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:42.888 00:17:42.888 --- 10.0.0.3 ping statistics --- 00:17:42.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.888 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:42.888 07:10:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:42.888 00:17:42.888 --- 10.0.0.1 ping statistics --- 00:17:42.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.888 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:42.888 07:10:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.888 07:10:26 -- nvmf/common.sh@421 -- # return 0 00:17:42.888 07:10:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:42.888 07:10:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.888 07:10:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:42.888 07:10:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:42.888 07:10:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.888 07:10:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:42.888 07:10:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:42.888 07:10:26 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:42.888 07:10:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:42.888 07:10:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:42.888 07:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:42.888 07:10:26 -- nvmf/common.sh@469 -- # nvmfpid=79302 00:17:42.888 07:10:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:42.888 07:10:26 -- nvmf/common.sh@470 -- # waitforlisten 79302 00:17:42.888 07:10:26 -- common/autotest_common.sh@819 -- # '[' -z 79302 ']' 00:17:42.888 07:10:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.888 07:10:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.888 07:10:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.888 07:10:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.888 07:10:26 -- common/autotest_common.sh@10 -- # set +x 00:17:42.888 [2024-07-11 07:10:26.929688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:42.888 [2024-07-11 07:10:26.930468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.146 [2024-07-11 07:10:27.072320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.146 [2024-07-11 07:10:27.193540] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.146 [2024-07-11 07:10:27.193751] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.146 [2024-07-11 07:10:27.193773] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.146 [2024-07-11 07:10:27.193786] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.146 [2024-07-11 07:10:27.194354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.146 [2024-07-11 07:10:27.194530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.146 [2024-07-11 07:10:27.194665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.146 [2024-07-11 07:10:27.194742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.081 07:10:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.081 07:10:27 -- common/autotest_common.sh@852 -- # return 0 00:17:44.081 07:10:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:44.081 07:10:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:44.081 07:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.081 07:10:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.081 07:10:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 [2024-07-11 07:10:27.934365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.081 07:10:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:44.081 07:10:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.081 07:10:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:44.081 07:10:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 Malloc1 00:17:44.081 07:10:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:44.081 07:10:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
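From here the trace is the multiconnection setup loop. Stripped of the rpc_cmd/xtrace plumbing it amounts to roughly the following, with NVMF_SUBSYS=11 and the addresses set up above; rpc.py stands in for the script's rpc_cmd helper, which talks to the target running inside the namespace, and the real script also runs waitforserial SPDK$i after each connect before moving on.

    # target side: TCP transport plus one malloc-backed subsystem and listener per index
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    # initiator side: one kernel NVMe/TCP connection per subsystem
    for i in $(seq 1 11); do
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
             -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    done

The lsblk/grep loop that follows each connect in the trace is the waitforserial check: it polls until a block device with serial SPDK$i shows up before the next subsystem is connected.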
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 [2024-07-11 07:10:28.017517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.081 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 Malloc2 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.081 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 Malloc3 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:44.081 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.081 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.081 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.081 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:44.081 
07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.081 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 Malloc4 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.340 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 Malloc5 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.340 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 Malloc6 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.340 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 Malloc7 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.340 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 Malloc8 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.340 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.340 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:44.340 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.340 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 
00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.599 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 Malloc9 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.599 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 Malloc10 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.599 07:10:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 Malloc11 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:44.599 07:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.599 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:44.599 07:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.599 07:10:28 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:44.599 07:10:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:44.599 07:10:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.858 07:10:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:44.858 07:10:28 -- common/autotest_common.sh@1177 -- # local i=0 00:17:44.858 07:10:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.858 07:10:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:44.858 07:10:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:46.760 07:10:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:46.760 07:10:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:46.760 07:10:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:17:46.760 07:10:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:46.760 07:10:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.760 07:10:30 -- common/autotest_common.sh@1187 -- # return 0 00:17:46.760 07:10:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:46.760 07:10:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:47.017 07:10:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:47.018 07:10:30 -- common/autotest_common.sh@1177 -- # local i=0 00:17:47.018 07:10:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.018 07:10:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:47.018 07:10:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:48.920 07:10:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:48.920 07:10:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:17:48.920 07:10:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:48.920 07:10:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:48.920 07:10:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.920 07:10:32 -- common/autotest_common.sh@1187 -- # return 0 00:17:48.920 07:10:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:17:48.920 07:10:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:49.178 07:10:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:49.178 07:10:33 -- common/autotest_common.sh@1177 -- # local i=0 00:17:49.178 07:10:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.178 07:10:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:49.178 07:10:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:51.098 07:10:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:51.098 07:10:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:51.098 07:10:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:17:51.098 07:10:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:51.098 07:10:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.098 07:10:35 -- common/autotest_common.sh@1187 -- # return 0 00:17:51.098 07:10:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:51.098 07:10:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:17:51.356 07:10:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:51.356 07:10:35 -- common/autotest_common.sh@1177 -- # local i=0 00:17:51.356 07:10:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.356 07:10:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:51.356 07:10:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:53.885 07:10:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:53.885 07:10:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:53.885 07:10:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:17:53.885 07:10:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:53.885 07:10:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.885 07:10:37 -- common/autotest_common.sh@1187 -- # return 0 00:17:53.885 07:10:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.885 07:10:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:17:53.885 07:10:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:17:53.885 07:10:37 -- common/autotest_common.sh@1177 -- # local i=0 00:17:53.885 07:10:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.885 07:10:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:53.885 07:10:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:55.790 07:10:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:55.790 07:10:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:55.790 07:10:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:17:55.790 07:10:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:55.790 07:10:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.790 07:10:39 
-- common/autotest_common.sh@1187 -- # return 0 00:17:55.790 07:10:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.790 07:10:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:17:55.790 07:10:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:17:55.790 07:10:39 -- common/autotest_common.sh@1177 -- # local i=0 00:17:55.790 07:10:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.790 07:10:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:55.790 07:10:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:57.694 07:10:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:57.694 07:10:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:57.694 07:10:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:17:57.694 07:10:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:57.694 07:10:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.694 07:10:41 -- common/autotest_common.sh@1187 -- # return 0 00:17:57.694 07:10:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:57.694 07:10:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:17:57.953 07:10:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:17:57.953 07:10:41 -- common/autotest_common.sh@1177 -- # local i=0 00:17:57.953 07:10:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.953 07:10:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:57.953 07:10:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:00.485 07:10:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:00.485 07:10:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:00.485 07:10:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:00.485 07:10:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:00.485 07:10:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.485 07:10:43 -- common/autotest_common.sh@1187 -- # return 0 00:18:00.485 07:10:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:00.486 07:10:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:00.486 07:10:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:00.486 07:10:44 -- common/autotest_common.sh@1177 -- # local i=0 00:18:00.486 07:10:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:00.486 07:10:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:00.486 07:10:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:02.423 07:10:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:02.423 07:10:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:02.423 07:10:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:02.423 07:10:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
00:18:02.423 07:10:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.423 07:10:46 -- common/autotest_common.sh@1187 -- # return 0 00:18:02.423 07:10:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:02.423 07:10:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:02.423 07:10:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:02.423 07:10:46 -- common/autotest_common.sh@1177 -- # local i=0 00:18:02.423 07:10:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.423 07:10:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:02.423 07:10:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:04.327 07:10:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:04.327 07:10:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:04.327 07:10:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:04.327 07:10:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:04.327 07:10:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.327 07:10:48 -- common/autotest_common.sh@1187 -- # return 0 00:18:04.327 07:10:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:04.327 07:10:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:04.586 07:10:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:04.586 07:10:48 -- common/autotest_common.sh@1177 -- # local i=0 00:18:04.586 07:10:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.586 07:10:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:04.586 07:10:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:06.489 07:10:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:06.489 07:10:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:06.489 07:10:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:06.489 07:10:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:06.489 07:10:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.489 07:10:50 -- common/autotest_common.sh@1187 -- # return 0 00:18:06.489 07:10:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:06.489 07:10:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:06.748 07:10:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:06.748 07:10:50 -- common/autotest_common.sh@1177 -- # local i=0 00:18:06.748 07:10:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.748 07:10:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:06.748 07:10:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:09.276 07:10:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:09.276 07:10:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:09.276 07:10:52 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:09.276 07:10:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:09.276 07:10:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.276 07:10:52 -- common/autotest_common.sh@1187 -- # return 0 00:18:09.276 07:10:52 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:09.276 [global] 00:18:09.276 thread=1 00:18:09.276 invalidate=1 00:18:09.276 rw=read 00:18:09.276 time_based=1 00:18:09.276 runtime=10 00:18:09.276 ioengine=libaio 00:18:09.276 direct=1 00:18:09.276 bs=262144 00:18:09.276 iodepth=64 00:18:09.276 norandommap=1 00:18:09.276 numjobs=1 00:18:09.276 00:18:09.276 [job0] 00:18:09.276 filename=/dev/nvme0n1 00:18:09.276 [job1] 00:18:09.276 filename=/dev/nvme10n1 00:18:09.276 [job2] 00:18:09.276 filename=/dev/nvme1n1 00:18:09.276 [job3] 00:18:09.276 filename=/dev/nvme2n1 00:18:09.276 [job4] 00:18:09.276 filename=/dev/nvme3n1 00:18:09.276 [job5] 00:18:09.276 filename=/dev/nvme4n1 00:18:09.276 [job6] 00:18:09.276 filename=/dev/nvme5n1 00:18:09.276 [job7] 00:18:09.276 filename=/dev/nvme6n1 00:18:09.276 [job8] 00:18:09.276 filename=/dev/nvme7n1 00:18:09.276 [job9] 00:18:09.276 filename=/dev/nvme8n1 00:18:09.276 [job10] 00:18:09.276 filename=/dev/nvme9n1 00:18:09.276 Could not set queue depth (nvme0n1) 00:18:09.276 Could not set queue depth (nvme10n1) 00:18:09.276 Could not set queue depth (nvme1n1) 00:18:09.276 Could not set queue depth (nvme2n1) 00:18:09.276 Could not set queue depth (nvme3n1) 00:18:09.276 Could not set queue depth (nvme4n1) 00:18:09.276 Could not set queue depth (nvme5n1) 00:18:09.276 Could not set queue depth (nvme6n1) 00:18:09.276 Could not set queue depth (nvme7n1) 00:18:09.276 Could not set queue depth (nvme8n1) 00:18:09.276 Could not set queue depth (nvme9n1) 00:18:09.276 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:09.276 fio-3.35 00:18:09.276 Starting 11 threads 00:18:21.479 00:18:21.479 job0: (groupid=0, jobs=1): err= 0: pid=79773: Thu Jul 11 07:11:03 2024 00:18:21.479 read: IOPS=383, BW=95.8MiB/s (100MB/s)(970MiB/10130msec) 00:18:21.479 slat (usec): min=21, max=123933, avg=2576.14, stdev=11720.88 
00:18:21.479 clat (msec): min=45, max=296, avg=164.25, stdev=22.94 00:18:21.479 lat (msec): min=45, max=308, avg=166.83, stdev=25.73 00:18:21.479 clat percentiles (msec): 00:18:21.479 | 1.00th=[ 93], 5.00th=[ 114], 10.00th=[ 140], 20.00th=[ 153], 00:18:21.479 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 171], 00:18:21.479 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 192], 00:18:21.479 | 99.00th=[ 203], 99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 284], 00:18:21.479 | 99.99th=[ 296] 00:18:21.479 bw ( KiB/s): min=77979, max=128000, per=6.14%, avg=97667.80, stdev=9359.58, samples=20 00:18:21.479 iops : min= 304, max= 500, avg=381.45, stdev=36.63, samples=20 00:18:21.479 lat (msec) : 50=0.13%, 100=1.86%, 250=97.60%, 500=0.41% 00:18:21.479 cpu : usr=0.15%, sys=1.50%, ctx=782, majf=0, minf=4097 00:18:21.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:21.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.479 issued rwts: total=3880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.479 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.479 job1: (groupid=0, jobs=1): err= 0: pid=79774: Thu Jul 11 07:11:03 2024 00:18:21.479 read: IOPS=558, BW=140MiB/s (146MB/s)(1408MiB/10079msec) 00:18:21.479 slat (usec): min=22, max=125103, avg=1772.51, stdev=6392.27 00:18:21.479 clat (msec): min=22, max=218, avg=112.58, stdev=25.19 00:18:21.479 lat (msec): min=24, max=289, avg=114.36, stdev=26.09 00:18:21.479 clat percentiles (msec): 00:18:21.479 | 1.00th=[ 73], 5.00th=[ 83], 10.00th=[ 87], 20.00th=[ 93], 00:18:21.479 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 107], 60.00th=[ 111], 00:18:21.479 | 70.00th=[ 117], 80.00th=[ 134], 90.00th=[ 150], 95.00th=[ 163], 00:18:21.479 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 211], 99.95th=[ 215], 00:18:21.479 | 99.99th=[ 220] 00:18:21.480 bw ( KiB/s): min=85504, max=174080, per=8.96%, avg=142482.65, stdev=26779.10, samples=20 00:18:21.480 iops : min= 334, max= 680, avg=556.45, stdev=104.55, samples=20 00:18:21.480 lat (msec) : 50=0.04%, 100=36.37%, 250=63.59% 00:18:21.480 cpu : usr=0.25%, sys=2.19%, ctx=1056, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=5631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job2: (groupid=0, jobs=1): err= 0: pid=79775: Thu Jul 11 07:11:03 2024 00:18:21.480 read: IOPS=573, BW=143MiB/s (150MB/s)(1446MiB/10082msec) 00:18:21.480 slat (usec): min=22, max=95196, avg=1725.81, stdev=6267.69 00:18:21.480 clat (msec): min=14, max=197, avg=109.71, stdev=25.07 00:18:21.480 lat (msec): min=15, max=268, avg=111.44, stdev=25.84 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 67], 5.00th=[ 80], 10.00th=[ 85], 20.00th=[ 91], 00:18:21.480 | 30.00th=[ 96], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 108], 00:18:21.480 | 70.00th=[ 114], 80.00th=[ 132], 90.00th=[ 148], 95.00th=[ 161], 00:18:21.480 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 194], 99.95th=[ 197], 00:18:21.480 | 99.99th=[ 199] 00:18:21.480 bw ( KiB/s): min=99527, max=181760, per=9.21%, avg=146382.95, stdev=27253.93, samples=20 00:18:21.480 iops : min= 388, max= 710, 
avg=571.70, stdev=106.48, samples=20 00:18:21.480 lat (msec) : 20=0.10%, 100=41.59%, 250=58.30% 00:18:21.480 cpu : usr=0.18%, sys=2.01%, ctx=1054, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=5782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job3: (groupid=0, jobs=1): err= 0: pid=79776: Thu Jul 11 07:11:03 2024 00:18:21.480 read: IOPS=995, BW=249MiB/s (261MB/s)(2508MiB/10073msec) 00:18:21.480 slat (usec): min=21, max=68094, avg=994.68, stdev=3968.91 00:18:21.480 clat (msec): min=17, max=158, avg=63.17, stdev=30.59 00:18:21.480 lat (msec): min=17, max=166, avg=64.16, stdev=31.19 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 34], 00:18:21.480 | 30.00th=[ 38], 40.00th=[ 43], 50.00th=[ 51], 60.00th=[ 77], 00:18:21.480 | 70.00th=[ 89], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 111], 00:18:21.480 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 138], 00:18:21.480 | 99.99th=[ 159] 00:18:21.480 bw ( KiB/s): min=144384, max=460800, per=16.05%, avg=255163.90, stdev=126948.76, samples=20 00:18:21.480 iops : min= 564, max= 1800, avg=996.65, stdev=495.96, samples=20 00:18:21.480 lat (msec) : 20=0.22%, 50=49.59%, 100=33.05%, 250=17.14% 00:18:21.480 cpu : usr=0.46%, sys=3.05%, ctx=2079, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=10032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job4: (groupid=0, jobs=1): err= 0: pid=79777: Thu Jul 11 07:11:03 2024 00:18:21.480 read: IOPS=566, BW=142MiB/s (149MB/s)(1427MiB/10074msec) 00:18:21.480 slat (usec): min=21, max=70527, avg=1747.32, stdev=6334.78 00:18:21.480 clat (msec): min=67, max=232, avg=111.04, stdev=24.05 00:18:21.480 lat (msec): min=67, max=232, avg=112.79, stdev=24.94 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 74], 5.00th=[ 83], 10.00th=[ 87], 20.00th=[ 92], 00:18:21.480 | 30.00th=[ 97], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 109], 00:18:21.480 | 70.00th=[ 116], 80.00th=[ 131], 90.00th=[ 150], 95.00th=[ 159], 00:18:21.480 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 203], 99.95th=[ 203], 00:18:21.480 | 99.99th=[ 232] 00:18:21.480 bw ( KiB/s): min=77979, max=173568, per=9.09%, avg=144527.90, stdev=27962.12, samples=20 00:18:21.480 iops : min= 304, max= 678, avg=564.45, stdev=109.24, samples=20 00:18:21.480 lat (msec) : 100=38.03%, 250=61.97% 00:18:21.480 cpu : usr=0.20%, sys=2.11%, ctx=1110, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=5709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job5: (groupid=0, jobs=1): err= 0: pid=79778: Thu Jul 11 07:11:03 2024 00:18:21.480 
read: IOPS=368, BW=92.2MiB/s (96.6MB/s)(934MiB/10135msec) 00:18:21.480 slat (usec): min=22, max=132582, avg=2629.12, stdev=10598.23 00:18:21.480 clat (msec): min=17, max=284, avg=170.70, stdev=23.72 00:18:21.480 lat (msec): min=18, max=288, avg=173.32, stdev=25.81 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 73], 5.00th=[ 146], 10.00th=[ 153], 20.00th=[ 159], 00:18:21.480 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 171], 60.00th=[ 176], 00:18:21.480 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 201], 00:18:21.480 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 284], 00:18:21.480 | 99.99th=[ 284] 00:18:21.480 bw ( KiB/s): min=81408, max=101376, per=5.91%, avg=93993.45, stdev=4925.41, samples=20 00:18:21.480 iops : min= 318, max= 396, avg=367.10, stdev=19.17, samples=20 00:18:21.480 lat (msec) : 20=0.05%, 50=0.11%, 100=1.69%, 250=96.68%, 500=1.47% 00:18:21.480 cpu : usr=0.09%, sys=1.34%, ctx=787, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=3736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job6: (groupid=0, jobs=1): err= 0: pid=79779: Thu Jul 11 07:11:03 2024 00:18:21.480 read: IOPS=376, BW=94.1MiB/s (98.7MB/s)(954MiB/10136msec) 00:18:21.480 slat (usec): min=22, max=110270, avg=2627.46, stdev=9714.39 00:18:21.480 clat (msec): min=23, max=290, avg=167.04, stdev=28.16 00:18:21.480 lat (msec): min=23, max=290, avg=169.67, stdev=29.89 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 42], 5.00th=[ 110], 10.00th=[ 142], 20.00th=[ 157], 00:18:21.480 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 171], 60.00th=[ 176], 00:18:21.480 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 199], 00:18:21.480 | 99.00th=[ 218], 99.50th=[ 247], 99.90th=[ 259], 99.95th=[ 262], 00:18:21.480 | 99.99th=[ 292] 00:18:21.480 bw ( KiB/s): min=73728, max=137728, per=6.04%, avg=96056.95, stdev=12927.91, samples=20 00:18:21.480 iops : min= 288, max= 538, avg=375.20, stdev=50.50, samples=20 00:18:21.480 lat (msec) : 50=1.60%, 100=0.81%, 250=97.20%, 500=0.39% 00:18:21.480 cpu : usr=0.20%, sys=1.50%, ctx=678, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=3817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job7: (groupid=0, jobs=1): err= 0: pid=79780: Thu Jul 11 07:11:03 2024 00:18:21.480 read: IOPS=1079, BW=270MiB/s (283MB/s)(2720MiB/10081msec) 00:18:21.480 slat (usec): min=21, max=71829, avg=909.79, stdev=3894.60 00:18:21.480 clat (msec): min=16, max=183, avg=58.28, stdev=32.04 00:18:21.480 lat (msec): min=16, max=183, avg=59.19, stdev=32.63 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 33], 00:18:21.480 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 41], 60.00th=[ 46], 00:18:21.480 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 112], 00:18:21.480 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 167], 99.95th=[ 184], 00:18:21.480 | 99.99th=[ 
184] 00:18:21.480 bw ( KiB/s): min=152064, max=471552, per=17.42%, avg=276907.80, stdev=141238.65, samples=20 00:18:21.480 iops : min= 594, max= 1842, avg=1081.60, stdev=551.77, samples=20 00:18:21.480 lat (msec) : 20=0.80%, 50=61.91%, 100=19.48%, 250=17.81% 00:18:21.480 cpu : usr=0.40%, sys=3.32%, ctx=2195, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=10881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job8: (groupid=0, jobs=1): err= 0: pid=79781: Thu Jul 11 07:11:03 2024 00:18:21.480 read: IOPS=368, BW=92.1MiB/s (96.5MB/s)(933MiB/10136msec) 00:18:21.480 slat (usec): min=22, max=90666, avg=2676.66, stdev=8809.57 00:18:21.480 clat (msec): min=24, max=309, avg=170.75, stdev=26.07 00:18:21.480 lat (msec): min=25, max=309, avg=173.43, stdev=27.61 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 70], 5.00th=[ 123], 10.00th=[ 146], 20.00th=[ 159], 00:18:21.480 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:18:21.480 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 199], 95.00th=[ 203], 00:18:21.480 | 99.00th=[ 220], 99.50th=[ 228], 99.90th=[ 253], 99.95th=[ 309], 00:18:21.480 | 99.99th=[ 309] 00:18:21.480 bw ( KiB/s): min=78848, max=128512, per=5.91%, avg=93933.45, stdev=9925.50, samples=20 00:18:21.480 iops : min= 308, max= 502, avg=366.90, stdev=38.78, samples=20 00:18:21.480 lat (msec) : 50=0.35%, 100=1.69%, 250=97.83%, 500=0.13% 00:18:21.480 cpu : usr=0.17%, sys=1.29%, ctx=843, majf=0, minf=4097 00:18:21.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:21.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.480 issued rwts: total=3733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.480 job9: (groupid=0, jobs=1): err= 0: pid=79782: Thu Jul 11 07:11:03 2024 00:18:21.480 read: IOPS=370, BW=92.7MiB/s (97.2MB/s)(939MiB/10136msec) 00:18:21.480 slat (usec): min=21, max=78048, avg=2639.03, stdev=8368.07 00:18:21.480 clat (msec): min=20, max=316, avg=169.70, stdev=37.91 00:18:21.480 lat (msec): min=20, max=316, avg=172.34, stdev=39.11 00:18:21.480 clat percentiles (msec): 00:18:21.480 | 1.00th=[ 58], 5.00th=[ 70], 10.00th=[ 121], 20.00th=[ 161], 00:18:21.480 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:18:21.480 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 211], 00:18:21.481 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 305], 99.95th=[ 317], 00:18:21.481 | 99.99th=[ 317] 00:18:21.481 bw ( KiB/s): min=77312, max=189440, per=5.95%, avg=94522.50, stdev=23006.39, samples=20 00:18:21.481 iops : min= 302, max= 740, avg=369.20, stdev=89.87, samples=20 00:18:21.481 lat (msec) : 50=0.85%, 100=8.97%, 250=89.83%, 500=0.35% 00:18:21.481 cpu : usr=0.14%, sys=1.31%, ctx=808, majf=0, minf=4097 00:18:21.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:21.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.481 issued rwts: total=3757,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:21.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.481 job10: (groupid=0, jobs=1): err= 0: pid=79783: Thu Jul 11 07:11:03 2024 00:18:21.481 read: IOPS=593, BW=148MiB/s (156MB/s)(1496MiB/10076msec) 00:18:21.481 slat (usec): min=17, max=135968, avg=1636.43, stdev=6611.34 00:18:21.481 clat (msec): min=12, max=307, avg=105.96, stdev=29.69 00:18:21.481 lat (msec): min=12, max=307, avg=107.60, stdev=30.62 00:18:21.481 clat percentiles (msec): 00:18:21.481 | 1.00th=[ 53], 5.00th=[ 66], 10.00th=[ 72], 20.00th=[ 84], 00:18:21.481 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 106], 00:18:21.481 | 70.00th=[ 114], 80.00th=[ 130], 90.00th=[ 153], 95.00th=[ 163], 00:18:21.481 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 201], 99.95th=[ 241], 00:18:21.481 | 99.99th=[ 309] 00:18:21.481 bw ( KiB/s): min=80384, max=221696, per=9.53%, avg=151570.65, stdev=36779.58, samples=20 00:18:21.481 iops : min= 314, max= 866, avg=591.90, stdev=143.71, samples=20 00:18:21.481 lat (msec) : 20=0.08%, 50=0.77%, 100=46.72%, 250=52.40%, 500=0.03% 00:18:21.481 cpu : usr=0.23%, sys=2.05%, ctx=1181, majf=0, minf=4097 00:18:21.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:21.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:21.481 issued rwts: total=5985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:21.481 00:18:21.481 Run status group 0 (all jobs): 00:18:21.481 READ: bw=1552MiB/s (1628MB/s), 92.1MiB/s-270MiB/s (96.5MB/s-283MB/s), io=15.4GiB (16.5GB), run=10073-10136msec 00:18:21.481 00:18:21.481 Disk stats (read/write): 00:18:21.481 nvme0n1: ios=7632/0, merge=0/0, ticks=1242590/0, in_queue=1242590, util=97.70% 00:18:21.481 nvme10n1: ios=11150/0, merge=0/0, ticks=1242315/0, in_queue=1242315, util=97.82% 00:18:21.481 nvme1n1: ios=11490/0, merge=0/0, ticks=1241847/0, in_queue=1241847, util=97.90% 00:18:21.481 nvme2n1: ios=19941/0, merge=0/0, ticks=1232652/0, in_queue=1232652, util=98.02% 00:18:21.481 nvme3n1: ios=11314/0, merge=0/0, ticks=1242127/0, in_queue=1242127, util=98.12% 00:18:21.481 nvme4n1: ios=7372/0, merge=0/0, ticks=1236106/0, in_queue=1236106, util=98.28% 00:18:21.481 nvme5n1: ios=7507/0, merge=0/0, ticks=1237384/0, in_queue=1237384, util=98.46% 00:18:21.481 nvme6n1: ios=21684/0, merge=0/0, ticks=1228778/0, in_queue=1228778, util=98.28% 00:18:21.481 nvme7n1: ios=7346/0, merge=0/0, ticks=1240637/0, in_queue=1240637, util=98.80% 00:18:21.481 nvme8n1: ios=7408/0, merge=0/0, ticks=1241970/0, in_queue=1241970, util=98.95% 00:18:21.481 nvme9n1: ios=11856/0, merge=0/0, ticks=1239407/0, in_queue=1239407, util=98.76% 00:18:21.481 07:11:03 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:21.481 [global] 00:18:21.481 thread=1 00:18:21.481 invalidate=1 00:18:21.481 rw=randwrite 00:18:21.481 time_based=1 00:18:21.481 runtime=10 00:18:21.481 ioengine=libaio 00:18:21.481 direct=1 00:18:21.481 bs=262144 00:18:21.481 iodepth=64 00:18:21.481 norandommap=1 00:18:21.481 numjobs=1 00:18:21.481 00:18:21.481 [job0] 00:18:21.481 filename=/dev/nvme0n1 00:18:21.481 [job1] 00:18:21.481 filename=/dev/nvme10n1 00:18:21.481 [job2] 00:18:21.481 filename=/dev/nvme1n1 00:18:21.481 [job3] 00:18:21.481 filename=/dev/nvme2n1 00:18:21.481 [job4] 00:18:21.481 filename=/dev/nvme3n1 
00:18:21.481 [job5] 00:18:21.481 filename=/dev/nvme4n1 00:18:21.481 [job6] 00:18:21.481 filename=/dev/nvme5n1 00:18:21.481 [job7] 00:18:21.481 filename=/dev/nvme6n1 00:18:21.481 [job8] 00:18:21.481 filename=/dev/nvme7n1 00:18:21.481 [job9] 00:18:21.481 filename=/dev/nvme8n1 00:18:21.481 [job10] 00:18:21.481 filename=/dev/nvme9n1 00:18:21.481 Could not set queue depth (nvme0n1) 00:18:21.481 Could not set queue depth (nvme10n1) 00:18:21.481 Could not set queue depth (nvme1n1) 00:18:21.481 Could not set queue depth (nvme2n1) 00:18:21.481 Could not set queue depth (nvme3n1) 00:18:21.481 Could not set queue depth (nvme4n1) 00:18:21.481 Could not set queue depth (nvme5n1) 00:18:21.481 Could not set queue depth (nvme6n1) 00:18:21.481 Could not set queue depth (nvme7n1) 00:18:21.481 Could not set queue depth (nvme8n1) 00:18:21.481 Could not set queue depth (nvme9n1) 00:18:21.481 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.481 fio-3.35 00:18:21.481 Starting 11 threads 00:18:31.460 00:18:31.460 job0: (groupid=0, jobs=1): err= 0: pid=79985: Thu Jul 11 07:11:14 2024 00:18:31.460 write: IOPS=701, BW=175MiB/s (184MB/s)(1778MiB/10136msec); 0 zone resets 00:18:31.460 slat (usec): min=25, max=24932, avg=1390.84, stdev=2566.38 00:18:31.460 clat (msec): min=14, max=300, avg=89.79, stdev=34.05 00:18:31.460 lat (msec): min=15, max=300, avg=91.18, stdev=34.45 00:18:31.460 clat percentiles (msec): 00:18:31.460 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 82], 00:18:31.460 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 89], 00:18:31.460 | 70.00th=[ 90], 80.00th=[ 91], 90.00th=[ 157], 95.00th=[ 167], 00:18:31.460 | 99.00th=[ 180], 99.50th=[ 205], 99.90th=[ 279], 99.95th=[ 292], 00:18:31.460 | 99.99th=[ 300] 00:18:31.460 bw ( KiB/s): min=94720, max=343552, per=14.57%, avg=180400.70, stdev=64444.84, samples=20 00:18:31.460 iops : min= 370, max= 1342, avg=704.65, stdev=251.75, samples=20 00:18:31.460 lat (msec) : 20=0.06%, 50=17.17%, 100=70.41%, 250=12.11%, 500=0.25% 00:18:31.460 cpu : usr=1.92%, sys=1.82%, ctx=8471, majf=0, minf=1 00:18:31.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:31.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.460 issued rwts: total=0,7111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.460 job1: (groupid=0, jobs=1): err= 0: pid=79986: Thu Jul 11 07:11:14 2024 00:18:31.460 write: IOPS=612, BW=153MiB/s (161MB/s)(1539MiB/10046msec); 0 zone resets 00:18:31.460 slat (usec): min=20, max=18079, avg=1583.95, stdev=3228.60 00:18:31.460 clat (usec): min=1943, max=253499, avg=102811.54, stdev=55185.55 00:18:31.460 lat (msec): min=2, max=255, avg=104.40, stdev=56.00 00:18:31.460 clat percentiles (msec): 00:18:31.460 | 1.00th=[ 11], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 50], 00:18:31.460 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 133], 60.00th=[ 140], 00:18:31.460 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 213], 00:18:31.460 | 99.00th=[ 228], 99.50th=[ 230], 99.90th=[ 245], 99.95th=[ 249], 00:18:31.460 | 99.99th=[ 253] 00:18:31.460 bw ( KiB/s): min=73728, max=328192, per=12.59%, avg=155887.10, stdev=87620.38, samples=20 00:18:31.460 iops : min= 288, max= 1282, avg=608.90, stdev=342.23, samples=20 00:18:31.460 lat (msec) : 2=0.02%, 4=0.13%, 10=0.81%, 20=1.25%, 50=20.69% 00:18:31.460 lat (msec) : 100=23.51%, 250=53.56%, 500=0.03% 00:18:31.460 cpu : usr=1.06%, sys=1.53%, ctx=8412, majf=0, minf=1 00:18:31.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:31.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.460 issued rwts: total=0,6154,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.460 job2: (groupid=0, jobs=1): err= 0: pid=79998: Thu Jul 11 07:11:14 2024 00:18:31.460 write: IOPS=274, BW=68.7MiB/s (72.1MB/s)(700MiB/10178msec); 0 zone resets 00:18:31.460 slat (usec): min=20, max=54938, avg=3486.44, stdev=6599.37 00:18:31.460 clat (msec): min=21, max=398, avg=229.19, stdev=40.44 00:18:31.460 lat (msec): min=21, max=398, avg=232.67, stdev=40.70 00:18:31.460 clat percentiles (msec): 00:18:31.460 | 1.00th=[ 64], 5.00th=[ 161], 10.00th=[ 182], 20.00th=[ 213], 00:18:31.460 | 30.00th=[ 224], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 245], 00:18:31.460 | 70.00th=[ 253], 80.00th=[ 257], 90.00th=[ 264], 95.00th=[ 271], 00:18:31.460 | 99.00th=[ 300], 99.50th=[ 355], 99.90th=[ 384], 99.95th=[ 401], 00:18:31.460 | 99.99th=[ 401] 00:18:31.460 bw ( KiB/s): min=61440, max=95232, per=5.65%, avg=70002.05, stdev=8873.20, samples=20 00:18:31.460 iops : min= 240, max= 372, avg=273.40, stdev=34.66, samples=20 00:18:31.460 lat (msec) : 50=0.71%, 100=1.57%, 250=63.44%, 500=34.27% 00:18:31.460 cpu : usr=0.73%, sys=0.80%, ctx=2971, majf=0, minf=1 00:18:31.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:18:31.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.460 issued rwts: total=0,2798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.460 job3: (groupid=0, jobs=1): err= 0: pid=79999: Thu Jul 11 07:11:14 2024 00:18:31.460 write: IOPS=402, BW=101MiB/s (106MB/s)(1026MiB/10182msec); 0 zone resets 00:18:31.460 slat (usec): min=18, max=27151, avg=2370.17, stdev=4387.22 00:18:31.460 clat (usec): 
min=941, max=397868, avg=156393.00, stdev=49659.80 00:18:31.460 lat (usec): min=1003, max=397902, avg=158763.17, stdev=50186.27 00:18:31.460 clat percentiles (msec): 00:18:31.460 | 1.00th=[ 5], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 136], 00:18:31.460 | 30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 144], 00:18:31.460 | 70.00th=[ 148], 80.00th=[ 213], 90.00th=[ 228], 95.00th=[ 232], 00:18:31.460 | 99.00th=[ 259], 99.50th=[ 326], 99.90th=[ 384], 99.95th=[ 384], 00:18:31.460 | 99.99th=[ 397] 00:18:31.460 bw ( KiB/s): min=69632, max=156672, per=8.35%, avg=103389.60, stdev=23864.60, samples=20 00:18:31.460 iops : min= 272, max= 612, avg=403.80, stdev=93.25, samples=20 00:18:31.460 lat (usec) : 1000=0.07% 00:18:31.460 lat (msec) : 2=0.20%, 4=0.49%, 10=2.17%, 20=0.41%, 50=0.56% 00:18:31.460 lat (msec) : 100=0.63%, 250=93.71%, 500=1.76% 00:18:31.460 cpu : usr=0.69%, sys=1.29%, ctx=5008, majf=0, minf=1 00:18:31.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:31.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.460 issued rwts: total=0,4102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.460 job4: (groupid=0, jobs=1): err= 0: pid=80000: Thu Jul 11 07:11:14 2024 00:18:31.460 write: IOPS=622, BW=156MiB/s (163MB/s)(1584MiB/10181msec); 0 zone resets 00:18:31.460 slat (usec): min=18, max=18726, avg=1555.03, stdev=3194.62 00:18:31.460 clat (usec): min=1775, max=402876, avg=101212.63, stdev=58187.18 00:18:31.460 lat (usec): min=1858, max=402941, avg=102767.66, stdev=58987.73 00:18:31.460 clat percentiles (msec): 00:18:31.460 | 1.00th=[ 20], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 54], 00:18:31.460 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 89], 00:18:31.460 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 224], 95.00th=[ 228], 00:18:31.460 | 99.00th=[ 255], 99.50th=[ 288], 99.90th=[ 376], 99.95th=[ 388], 00:18:31.460 | 99.99th=[ 405] 00:18:31.460 bw ( KiB/s): min=69632, max=323072, per=12.97%, avg=160607.00, stdev=76151.82, samples=20 00:18:31.460 iops : min= 272, max= 1262, avg=627.35, stdev=297.49, samples=20 00:18:31.460 lat (msec) : 2=0.02%, 4=0.09%, 10=0.30%, 20=0.63%, 50=6.82% 00:18:31.460 lat (msec) : 100=76.12%, 250=14.85%, 500=1.17% 00:18:31.460 cpu : usr=1.56%, sys=1.73%, ctx=7672, majf=0, minf=1 00:18:31.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:31.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.460 issued rwts: total=0,6337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.460 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.460 job5: (groupid=0, jobs=1): err= 0: pid=80001: Thu Jul 11 07:11:14 2024 00:18:31.460 write: IOPS=305, BW=76.4MiB/s (80.1MB/s)(774MiB/10127msec); 0 zone resets 00:18:31.460 slat (usec): min=19, max=56566, avg=3224.69, stdev=6017.06 00:18:31.460 clat (msec): min=20, max=291, avg=206.05, stdev=45.99 00:18:31.460 lat (msec): min=20, max=291, avg=209.27, stdev=46.33 00:18:31.460 clat percentiles (msec): 00:18:31.460 | 1.00th=[ 92], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:18:31.461 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 213], 60.00th=[ 234], 00:18:31.461 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 259], 95.00th=[ 266], 00:18:31.461 | 99.00th=[ 279], 
99.50th=[ 279], 99.90th=[ 284], 99.95th=[ 292], 00:18:31.461 | 99.99th=[ 292] 00:18:31.461 bw ( KiB/s): min=61440, max=106496, per=6.27%, avg=77627.95, stdev=15940.64, samples=20 00:18:31.461 iops : min= 240, max= 416, avg=303.20, stdev=62.26, samples=20 00:18:31.461 lat (msec) : 50=0.36%, 100=0.65%, 250=73.48%, 500=25.52% 00:18:31.461 cpu : usr=0.86%, sys=1.01%, ctx=2056, majf=0, minf=1 00:18:31.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.461 issued rwts: total=0,3096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.461 job6: (groupid=0, jobs=1): err= 0: pid=80002: Thu Jul 11 07:11:14 2024 00:18:31.461 write: IOPS=313, BW=78.4MiB/s (82.2MB/s)(798MiB/10178msec); 0 zone resets 00:18:31.461 slat (usec): min=17, max=36813, avg=3055.20, stdev=5672.30 00:18:31.461 clat (msec): min=5, max=396, avg=200.97, stdev=50.07 00:18:31.461 lat (msec): min=5, max=396, avg=204.03, stdev=50.59 00:18:31.461 clat percentiles (msec): 00:18:31.461 | 1.00th=[ 87], 5.00th=[ 99], 10.00th=[ 109], 20.00th=[ 157], 00:18:31.461 | 30.00th=[ 203], 40.00th=[ 213], 50.00th=[ 220], 60.00th=[ 224], 00:18:31.461 | 70.00th=[ 228], 80.00th=[ 232], 90.00th=[ 245], 95.00th=[ 259], 00:18:31.461 | 99.00th=[ 284], 99.50th=[ 338], 99.90th=[ 384], 99.95th=[ 397], 00:18:31.461 | 99.99th=[ 397] 00:18:31.461 bw ( KiB/s): min=61440, max=147968, per=6.46%, avg=80059.20, stdev=19804.74, samples=20 00:18:31.461 iops : min= 240, max= 578, avg=312.70, stdev=77.36, samples=20 00:18:31.461 lat (msec) : 10=0.06%, 50=0.38%, 100=5.67%, 250=86.15%, 500=7.74% 00:18:31.461 cpu : usr=0.78%, sys=0.85%, ctx=3670, majf=0, minf=1 00:18:31.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.461 issued rwts: total=0,3191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.461 job7: (groupid=0, jobs=1): err= 0: pid=80003: Thu Jul 11 07:11:14 2024 00:18:31.461 write: IOPS=314, BW=78.5MiB/s (82.3MB/s)(796MiB/10142msec); 0 zone resets 00:18:31.461 slat (usec): min=26, max=40428, avg=3135.97, stdev=5662.74 00:18:31.461 clat (msec): min=4, max=303, avg=200.54, stdev=41.25 00:18:31.461 lat (msec): min=4, max=303, avg=203.68, stdev=41.52 00:18:31.461 clat percentiles (msec): 00:18:31.461 | 1.00th=[ 74], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:18:31.461 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 213], 60.00th=[ 226], 00:18:31.461 | 70.00th=[ 232], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 253], 00:18:31.461 | 99.00th=[ 264], 99.50th=[ 266], 99.90th=[ 292], 99.95th=[ 305], 00:18:31.461 | 99.99th=[ 305] 00:18:31.461 bw ( KiB/s): min=63488, max=104448, per=6.45%, avg=79903.95, stdev=14417.36, samples=20 00:18:31.461 iops : min= 248, max= 408, avg=312.10, stdev=56.29, samples=20 00:18:31.461 lat (msec) : 10=0.31%, 20=0.06%, 50=0.25%, 100=0.63%, 250=91.37% 00:18:31.461 lat (msec) : 500=7.38% 00:18:31.461 cpu : usr=1.00%, sys=0.79%, ctx=4217, majf=0, minf=1 00:18:31.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.461 issued rwts: total=0,3185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.461 job8: (groupid=0, jobs=1): err= 0: pid=80004: Thu Jul 11 07:11:14 2024 00:18:31.461 write: IOPS=387, BW=97.0MiB/s (102MB/s)(987MiB/10178msec); 0 zone resets 00:18:31.461 slat (usec): min=19, max=19118, avg=2529.20, stdev=4501.41 00:18:31.461 clat (msec): min=21, max=393, avg=162.35, stdev=40.99 00:18:31.461 lat (msec): min=21, max=393, avg=164.88, stdev=41.35 00:18:31.461 clat percentiles (msec): 00:18:31.461 | 1.00th=[ 109], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 140], 00:18:31.461 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 144], 00:18:31.461 | 70.00th=[ 150], 80.00th=[ 215], 90.00th=[ 226], 95.00th=[ 228], 00:18:31.461 | 99.00th=[ 259], 99.50th=[ 321], 99.90th=[ 380], 99.95th=[ 393], 00:18:31.461 | 99.99th=[ 393] 00:18:31.461 bw ( KiB/s): min=69493, max=118784, per=8.03%, avg=99467.45, stdev=20874.53, samples=20 00:18:31.461 iops : min= 271, max= 464, avg=388.50, stdev=81.61, samples=20 00:18:31.461 lat (msec) : 50=0.41%, 100=0.51%, 250=97.27%, 500=1.82% 00:18:31.461 cpu : usr=0.75%, sys=1.12%, ctx=4802, majf=0, minf=1 00:18:31.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.461 issued rwts: total=0,3949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.461 job9: (groupid=0, jobs=1): err= 0: pid=80005: Thu Jul 11 07:11:14 2024 00:18:31.461 write: IOPS=321, BW=80.5MiB/s (84.4MB/s)(816MiB/10134msec); 0 zone resets 00:18:31.461 slat (usec): min=28, max=32868, avg=3059.64, stdev=5385.66 00:18:31.461 clat (msec): min=18, max=301, avg=195.68, stdev=35.78 00:18:31.461 lat (msec): min=18, max=301, avg=198.74, stdev=35.98 00:18:31.461 clat percentiles (msec): 00:18:31.461 | 1.00th=[ 104], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:18:31.461 | 30.00th=[ 167], 40.00th=[ 171], 50.00th=[ 209], 60.00th=[ 218], 00:18:31.461 | 70.00th=[ 222], 80.00th=[ 228], 90.00th=[ 234], 95.00th=[ 247], 00:18:31.461 | 99.00th=[ 253], 99.50th=[ 259], 99.90th=[ 292], 99.95th=[ 300], 00:18:31.461 | 99.99th=[ 300] 00:18:31.461 bw ( KiB/s): min=65536, max=104448, per=6.61%, avg=81886.55, stdev=12891.58, samples=20 00:18:31.461 iops : min= 256, max= 408, avg=319.80, stdev=50.32, samples=20 00:18:31.461 lat (msec) : 20=0.12%, 50=0.25%, 100=0.61%, 250=96.44%, 500=2.58% 00:18:31.461 cpu : usr=0.90%, sys=1.08%, ctx=3727, majf=0, minf=1 00:18:31.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.461 issued rwts: total=0,3262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.461 job10: (groupid=0, jobs=1): err= 0: pid=80006: Thu Jul 11 07:11:14 2024 00:18:31.461 write: IOPS=604, BW=151MiB/s (159MB/s)(1518MiB/10043msec); 0 zone resets 00:18:31.461 slat (usec): min=19, max=27147, avg=1592.90, stdev=3689.93 00:18:31.461 clat (msec): min=3, max=254, avg=104.17, stdev=76.91 00:18:31.461 lat (msec): min=4, max=254, 
avg=105.77, stdev=78.05 00:18:31.461 clat percentiles (msec): 00:18:31.461 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:18:31.461 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 88], 00:18:31.461 | 70.00th=[ 105], 80.00th=[ 218], 90.00th=[ 228], 95.00th=[ 234], 00:18:31.461 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 253], 99.95th=[ 255], 00:18:31.461 | 99.99th=[ 255] 00:18:31.461 bw ( KiB/s): min=65536, max=355328, per=12.42%, avg=153812.85, stdev=113271.90, samples=20 00:18:31.461 iops : min= 256, max= 1388, avg=600.80, stdev=442.44, samples=20 00:18:31.461 lat (msec) : 4=0.02%, 10=0.20%, 20=0.53%, 50=49.73%, 100=18.56% 00:18:31.461 lat (msec) : 250=30.18%, 500=0.79% 00:18:31.461 cpu : usr=1.48%, sys=1.61%, ctx=7633, majf=0, minf=1 00:18:31.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:31.461 issued rwts: total=0,6073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.461 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.461 00:18:31.461 Run status group 0 (all jobs): 00:18:31.461 WRITE: bw=1209MiB/s (1268MB/s), 68.7MiB/s-175MiB/s (72.1MB/s-184MB/s), io=12.0GiB (12.9GB), run=10043-10182msec 00:18:31.461 00:18:31.461 Disk stats (read/write): 00:18:31.461 nvme0n1: ios=49/14085, merge=0/0, ticks=44/1210878, in_queue=1210922, util=97.77% 00:18:31.461 nvme10n1: ios=49/12159, merge=0/0, ticks=46/1218574, in_queue=1218620, util=97.94% 00:18:31.461 nvme1n1: ios=48/5461, merge=0/0, ticks=36/1207203, in_queue=1207239, util=98.03% 00:18:31.461 nvme2n1: ios=13/8068, merge=0/0, ticks=26/1208584, in_queue=1208610, util=97.96% 00:18:31.461 nvme3n1: ios=9/12544, merge=0/0, ticks=36/1207471, in_queue=1207507, util=98.09% 00:18:31.461 nvme4n1: ios=0/6048, merge=0/0, ticks=0/1208653, in_queue=1208653, util=98.15% 00:18:31.461 nvme5n1: ios=0/6244, merge=0/0, ticks=0/1208072, in_queue=1208072, util=98.31% 00:18:31.461 nvme6n1: ios=0/6238, merge=0/0, ticks=0/1211982, in_queue=1211982, util=98.53% 00:18:31.461 nvme7n1: ios=0/7759, merge=0/0, ticks=0/1207324, in_queue=1207324, util=98.66% 00:18:31.461 nvme8n1: ios=0/6389, merge=0/0, ticks=0/1210958, in_queue=1210958, util=98.82% 00:18:31.461 nvme9n1: ios=0/11961, merge=0/0, ticks=0/1218816, in_queue=1218816, util=98.87% 00:18:31.461 07:11:14 -- target/multiconnection.sh@36 -- # sync 00:18:31.461 07:11:14 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:31.461 07:11:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.461 07:11:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.461 07:11:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:31.461 07:11:14 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.461 07:11:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.461 07:11:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:31.461 07:11:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.461 07:11:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:31.461 07:11:14 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.461 07:11:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.461 07:11:14 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.461 07:11:14 -- common/autotest_common.sh@10 -- # set +x 00:18:31.461 07:11:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.461 07:11:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.461 07:11:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:31.461 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:31.461 07:11:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:31.461 07:11:14 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.461 07:11:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.461 07:11:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:31.461 07:11:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.461 07:11:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:31.461 07:11:14 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.461 07:11:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:31.461 07:11:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:14 -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.462 07:11:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:31.462 07:11:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:31.462 07:11:14 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:31.462 07:11:14 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.462 07:11:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:31.462 07:11:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:14 -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.462 07:11:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:31.462 07:11:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:31.462 07:11:14 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:31.462 07:11:14 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.462 07:11:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:31.462 07:11:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:14 -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.462 07:11:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:31.462 07:11:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:31.462 07:11:14 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:31.462 07:11:14 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.462 07:11:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:31.462 07:11:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:14 -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.462 07:11:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:31.462 07:11:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:31.462 07:11:14 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:31.462 07:11:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.462 07:11:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:31.462 07:11:14 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.462 07:11:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:31.462 07:11:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:14 -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.462 07:11:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:31.462 07:11:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:31.462 07:11:15 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.462 07:11:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:31.462 07:11:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:31.462 07:11:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.462 07:11:15 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.462 07:11:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:31.462 07:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:15 -- common/autotest_common.sh@579 -- 
# [[ 0 == 0 ]] 00:18:31.462 07:11:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:31.462 07:11:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:31.462 07:11:15 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.462 07:11:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:31.462 07:11:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.462 07:11:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:31.462 07:11:15 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.462 07:11:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:31.462 07:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.462 07:11:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:31.462 07:11:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:31.462 07:11:15 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:31.462 07:11:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.462 07:11:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.462 07:11:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:31.462 07:11:15 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.462 07:11:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:31.462 07:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.462 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 07:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.462 07:11:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.462 07:11:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:31.462 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:31.462 07:11:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:31.462 07:11:15 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.462 07:11:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:31.721 07:11:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.721 07:11:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.721 07:11:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:31.721 07:11:15 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.721 07:11:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:31.721 07:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.721 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.721 07:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.721 07:11:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:31.721 07:11:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:31.721 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:31.721 07:11:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:31.721 07:11:15 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.721 07:11:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:31.721 07:11:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.721 07:11:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.721 07:11:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:31.721 07:11:15 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.721 07:11:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:31.721 07:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.721 07:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.721 07:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.721 07:11:15 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:31.721 07:11:15 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:31.721 07:11:15 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:31.721 07:11:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:31.721 07:11:15 -- nvmf/common.sh@116 -- # sync 00:18:31.721 07:11:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:31.721 07:11:15 -- nvmf/common.sh@119 -- # set +e 00:18:31.721 07:11:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:31.721 07:11:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:31.721 rmmod nvme_tcp 00:18:31.721 rmmod nvme_fabrics 00:18:31.980 rmmod nvme_keyring 00:18:31.980 07:11:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:31.980 07:11:15 -- nvmf/common.sh@123 -- # set -e 00:18:31.980 07:11:15 -- nvmf/common.sh@124 -- # return 0 00:18:31.980 07:11:15 -- nvmf/common.sh@477 -- # '[' -n 79302 ']' 00:18:31.980 07:11:15 -- nvmf/common.sh@478 -- # killprocess 79302 00:18:31.980 07:11:15 -- common/autotest_common.sh@926 -- # '[' -z 79302 ']' 00:18:31.980 07:11:15 -- common/autotest_common.sh@930 -- # kill -0 79302 00:18:31.980 07:11:15 -- common/autotest_common.sh@931 -- # uname 00:18:31.980 07:11:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:31.980 07:11:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79302 00:18:31.980 07:11:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:31.980 07:11:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:31.980 killing process with pid 79302 00:18:31.980 07:11:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79302' 00:18:31.980 07:11:15 -- common/autotest_common.sh@945 -- # kill 79302 00:18:31.980 07:11:15 -- common/autotest_common.sh@950 -- # wait 79302 00:18:32.548 07:11:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:32.548 07:11:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:32.548 07:11:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:32.548 07:11:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:32.548 07:11:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:32.548 07:11:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.548 07:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.548 07:11:16 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.548 07:11:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:32.548 00:18:32.548 real 0m50.001s 00:18:32.548 user 2m51.442s 00:18:32.548 sys 0m22.516s 00:18:32.548 07:11:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.548 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:18:32.548 ************************************ 00:18:32.548 END TEST nvmf_multiconnection 00:18:32.548 ************************************ 00:18:32.548 07:11:16 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:32.548 07:11:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:32.548 07:11:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:32.548 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:18:32.548 ************************************ 00:18:32.548 START TEST nvmf_initiator_timeout 00:18:32.548 ************************************ 00:18:32.548 07:11:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:32.548 * Looking for test storage... 00:18:32.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:32.548 07:11:16 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:32.548 07:11:16 -- nvmf/common.sh@7 -- # uname -s 00:18:32.548 07:11:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.548 07:11:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.548 07:11:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.548 07:11:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.548 07:11:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.548 07:11:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.548 07:11:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.548 07:11:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.548 07:11:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.548 07:11:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.548 07:11:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:18:32.548 07:11:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:18:32.548 07:11:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.548 07:11:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.548 07:11:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:32.548 07:11:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:32.548 07:11:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.548 07:11:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.548 07:11:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.548 07:11:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.548 07:11:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.548 07:11:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.548 07:11:16 -- paths/export.sh@5 -- # export PATH 00:18:32.548 07:11:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.548 07:11:16 -- nvmf/common.sh@46 -- # : 0 00:18:32.548 07:11:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:32.548 07:11:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:32.548 07:11:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:32.548 07:11:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.548 07:11:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.548 07:11:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:32.548 07:11:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:32.548 07:11:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:32.548 07:11:16 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.548 07:11:16 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.548 07:11:16 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:32.548 07:11:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:32.548 07:11:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.549 07:11:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:32.549 07:11:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:32.549 07:11:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:32.549 07:11:16 -- nvmf/common.sh@616 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.549 07:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.549 07:11:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.549 07:11:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:32.549 07:11:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:32.549 07:11:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:32.549 07:11:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:32.549 07:11:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:32.549 07:11:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:32.549 07:11:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.549 07:11:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.549 07:11:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:32.549 07:11:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:32.549 07:11:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:32.549 07:11:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:32.549 07:11:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:32.549 07:11:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.549 07:11:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:32.549 07:11:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:32.549 07:11:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:32.549 07:11:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:32.549 07:11:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:32.549 07:11:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:32.549 Cannot find device "nvmf_tgt_br" 00:18:32.549 07:11:16 -- nvmf/common.sh@154 -- # true 00:18:32.549 07:11:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:32.549 Cannot find device "nvmf_tgt_br2" 00:18:32.549 07:11:16 -- nvmf/common.sh@155 -- # true 00:18:32.549 07:11:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:32.549 07:11:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:32.549 Cannot find device "nvmf_tgt_br" 00:18:32.549 07:11:16 -- nvmf/common.sh@157 -- # true 00:18:32.549 07:11:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:32.549 Cannot find device "nvmf_tgt_br2" 00:18:32.549 07:11:16 -- nvmf/common.sh@158 -- # true 00:18:32.549 07:11:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:32.807 07:11:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:32.807 07:11:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:32.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.807 07:11:16 -- nvmf/common.sh@161 -- # true 00:18:32.807 07:11:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:32.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.807 07:11:16 -- nvmf/common.sh@162 -- # true 00:18:32.807 07:11:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:32.807 07:11:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:32.807 07:11:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:32.807 07:11:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer 
name nvmf_tgt_br2 00:18:32.807 07:11:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:32.807 07:11:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:32.807 07:11:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:32.807 07:11:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:32.807 07:11:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:32.807 07:11:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:32.807 07:11:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:32.807 07:11:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:32.807 07:11:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:32.807 07:11:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:32.807 07:11:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:32.807 07:11:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:32.807 07:11:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:32.807 07:11:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:32.807 07:11:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:32.807 07:11:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:32.807 07:11:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:32.807 07:11:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:32.807 07:11:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:32.807 07:11:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:32.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:32.808 00:18:32.808 --- 10.0.0.2 ping statistics --- 00:18:32.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.808 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:32.808 07:11:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:32.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:32.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:32.808 00:18:32.808 --- 10.0.0.3 ping statistics --- 00:18:32.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.808 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:32.808 07:11:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:32.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:32.808 00:18:32.808 --- 10.0.0.1 ping statistics --- 00:18:32.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.808 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:32.808 07:11:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.808 07:11:16 -- nvmf/common.sh@421 -- # return 0 00:18:32.808 07:11:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.808 07:11:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.808 07:11:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.808 07:11:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.808 07:11:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.808 07:11:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.808 07:11:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:33.066 07:11:16 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:33.066 07:11:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:33.066 07:11:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:33.066 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:18:33.066 07:11:16 -- nvmf/common.sh@469 -- # nvmfpid=80381 00:18:33.066 07:11:16 -- nvmf/common.sh@470 -- # waitforlisten 80381 00:18:33.066 07:11:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:33.066 07:11:16 -- common/autotest_common.sh@819 -- # '[' -z 80381 ']' 00:18:33.066 07:11:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.066 07:11:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:33.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.066 07:11:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.066 07:11:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:33.066 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:18:33.066 [2024-07-11 07:11:16.932591] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:33.067 [2024-07-11 07:11:16.932695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.067 [2024-07-11 07:11:17.071921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.324 [2024-07-11 07:11:17.145972] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:33.324 [2024-07-11 07:11:17.146127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.324 [2024-07-11 07:11:17.146140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.324 [2024-07-11 07:11:17.146148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
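For orientation, the nvmf_veth_init sequence traced above reduces to the shell sketch below. Interface names, addresses, firewall rules and the target invocation are copied from the log itself; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity, and the authoritative helper lives in test/nvmf/common.sh.

  # isolated namespace for the target, two veth pairs bridged back to the initiator
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator keeps 10.0.0.1, the target namespace answers on 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # tie both veth peers into one bridge and admit NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # launch the target inside the namespace with the flags recorded in the log
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &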
00:18:33.324 [2024-07-11 07:11:17.146355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.324 [2024-07-11 07:11:17.146646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.324 [2024-07-11 07:11:17.146694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.324 [2024-07-11 07:11:17.146696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.890 07:11:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.890 07:11:17 -- common/autotest_common.sh@852 -- # return 0 00:18:33.890 07:11:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:33.890 07:11:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:33.891 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:33.891 07:11:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.891 07:11:17 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:33.891 07:11:17 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:33.891 07:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.891 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:33.891 Malloc0 00:18:33.891 07:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.891 07:11:17 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:33.891 07:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.891 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:34.149 Delay0 00:18:34.149 07:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.149 07:11:17 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.149 07:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.149 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:34.149 [2024-07-11 07:11:17.955966] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.149 07:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.149 07:11:17 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:34.149 07:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.149 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:34.149 07:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.149 07:11:17 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:34.149 07:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.149 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:34.149 07:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.149 07:11:17 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.149 07:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.149 07:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:34.149 [2024-07-11 07:11:17.984239] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.149 07:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.149 07:11:17 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.149 07:11:18 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:34.149 07:11:18 -- common/autotest_common.sh@1177 -- # local i=0 00:18:34.149 07:11:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.149 07:11:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:34.149 07:11:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:36.676 07:11:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:36.676 07:11:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:36.676 07:11:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:36.676 07:11:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:36.676 07:11:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.676 07:11:20 -- common/autotest_common.sh@1187 -- # return 0 00:18:36.676 07:11:20 -- target/initiator_timeout.sh@35 -- # fio_pid=80462 00:18:36.676 07:11:20 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:36.676 07:11:20 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:36.676 [global] 00:18:36.676 thread=1 00:18:36.676 invalidate=1 00:18:36.676 rw=write 00:18:36.676 time_based=1 00:18:36.676 runtime=60 00:18:36.676 ioengine=libaio 00:18:36.676 direct=1 00:18:36.676 bs=4096 00:18:36.676 iodepth=1 00:18:36.676 norandommap=0 00:18:36.676 numjobs=1 00:18:36.676 00:18:36.676 verify_dump=1 00:18:36.676 verify_backlog=512 00:18:36.676 verify_state_save=0 00:18:36.676 do_verify=1 00:18:36.676 verify=crc32c-intel 00:18:36.676 [job0] 00:18:36.676 filename=/dev/nvme0n1 00:18:36.676 Could not set queue depth (nvme0n1) 00:18:36.676 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:36.676 fio-3.35 00:18:36.676 Starting 1 thread 00:18:39.222 07:11:23 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:39.222 07:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.222 07:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:39.222 true 00:18:39.222 07:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.222 07:11:23 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:39.222 07:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.222 07:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:39.222 true 00:18:39.222 07:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.222 07:11:23 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:39.222 07:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.222 07:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:39.222 true 00:18:39.222 07:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.222 07:11:23 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:39.222 07:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.222 07:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:39.222 true 00:18:39.222 07:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.222 07:11:23 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:42.494 07:11:26 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:42.494 07:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.494 07:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:42.494 true 00:18:42.494 07:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.494 07:11:26 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:42.494 07:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.494 07:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:42.494 true 00:18:42.494 07:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.494 07:11:26 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:42.495 07:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.495 07:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:42.495 true 00:18:42.495 07:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.495 07:11:26 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:42.495 07:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.495 07:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:42.495 true 00:18:42.495 07:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.495 07:11:26 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:42.495 07:11:26 -- target/initiator_timeout.sh@54 -- # wait 80462 00:19:38.731 00:19:38.731 job0: (groupid=0, jobs=1): err= 0: pid=80483: Thu Jul 11 07:12:20 2024 00:19:38.731 read: IOPS=836, BW=3345KiB/s (3425kB/s)(196MiB/60000msec) 00:19:38.731 slat (usec): min=12, max=13104, avg=15.42, stdev=68.10 00:19:38.731 clat (usec): min=153, max=40392k, avg=1001.51, stdev=180319.32 00:19:38.731 lat (usec): min=166, max=40392k, avg=1016.93, stdev=180319.33 00:19:38.731 clat percentiles (usec): 00:19:38.731 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 184], 00:19:38.731 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:19:38.731 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 227], 00:19:38.731 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 326], 00:19:38.731 | 99.99th=[ 652] 00:19:38.731 write: IOPS=839, BW=3357KiB/s (3437kB/s)(197MiB/60000msec); 0 zone resets 00:19:38.731 slat (usec): min=18, max=696, avg=22.01, stdev= 7.30 00:19:38.731 clat (usec): min=119, max=586, avg=153.36, stdev=14.81 00:19:38.731 lat (usec): min=140, max=919, avg=175.37, stdev=16.99 00:19:38.731 clat percentiles (usec): 00:19:38.731 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:19:38.731 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:19:38.731 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:19:38.731 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 245], 99.95th=[ 262], 00:19:38.731 | 99.99th=[ 424] 00:19:38.731 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=10081.90, stdev=1585.09, samples=39 00:19:38.731 iops : min= 1024, max= 3072, avg=2520.46, stdev=396.27, samples=39 00:19:38.731 lat (usec) : 250=99.56%, 500=0.42%, 750=0.01% 00:19:38.731 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:19:38.731 cpu : usr=0.53%, sys=2.22%, ctx=100536, majf=0, minf=2 00:19:38.731 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:38.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.731 issued rwts: total=50176,50351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.731 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.731 00:19:38.731 Run status group 0 (all jobs): 00:19:38.731 READ: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=196MiB (206MB), run=60000-60000msec 00:19:38.731 WRITE: bw=3357KiB/s (3437kB/s), 3357KiB/s-3357KiB/s (3437kB/s-3437kB/s), io=197MiB (206MB), run=60000-60000msec 00:19:38.731 00:19:38.731 Disk stats (read/write): 00:19:38.731 nvme0n1: ios=50163/50176, merge=0/0, ticks=10314/8283, in_queue=18597, util=99.91% 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:38.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:38.731 07:12:20 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.731 07:12:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.731 07:12:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.731 07:12:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.731 07:12:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:38.731 nvmf hotplug test: fio successful as expected 00:19:38.731 07:12:20 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.731 07:12:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.731 07:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:38.731 07:12:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:38.731 07:12:20 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:38.731 07:12:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.731 07:12:20 -- nvmf/common.sh@116 -- # sync 00:19:38.731 07:12:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.731 07:12:20 -- nvmf/common.sh@119 -- # set +e 00:19:38.731 07:12:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.731 07:12:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.731 rmmod nvme_tcp 00:19:38.731 rmmod nvme_fabrics 00:19:38.731 rmmod nvme_keyring 00:19:38.731 07:12:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.731 07:12:20 -- nvmf/common.sh@123 -- # set -e 00:19:38.731 07:12:20 -- nvmf/common.sh@124 -- # return 0 00:19:38.731 07:12:20 -- nvmf/common.sh@477 -- # '[' -n 80381 ']' 00:19:38.731 07:12:20 -- nvmf/common.sh@478 -- # killprocess 80381 00:19:38.731 07:12:20 -- common/autotest_common.sh@926 -- # '[' -z 80381 ']' 00:19:38.731 07:12:20 -- common/autotest_common.sh@930 -- # kill -0 80381 00:19:38.731 07:12:20 -- common/autotest_common.sh@931 -- # uname 00:19:38.731 07:12:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:38.731 07:12:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80381 00:19:38.731 07:12:20 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:38.731 killing process with pid 80381 00:19:38.731 07:12:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:38.731 07:12:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80381' 00:19:38.731 07:12:20 -- common/autotest_common.sh@945 -- # kill 80381 00:19:38.731 07:12:20 -- common/autotest_common.sh@950 -- # wait 80381 00:19:38.731 07:12:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:38.731 07:12:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:38.731 07:12:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:38.731 07:12:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.731 07:12:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:38.731 07:12:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.731 07:12:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.731 07:12:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.731 07:12:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:38.731 00:19:38.731 real 1m4.492s 00:19:38.731 user 4m7.149s 00:19:38.731 sys 0m7.634s 00:19:38.731 07:12:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.731 07:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:38.731 ************************************ 00:19:38.731 END TEST nvmf_initiator_timeout 00:19:38.731 ************************************ 00:19:38.731 07:12:20 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:38.732 07:12:20 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:38.732 07:12:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:38.732 07:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:38.732 07:12:21 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:38.732 07:12:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:38.732 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:19:38.732 07:12:21 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:38.732 07:12:21 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:38.732 07:12:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:38.732 07:12:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:38.732 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:19:38.732 ************************************ 00:19:38.732 START TEST nvmf_multicontroller 00:19:38.732 ************************************ 00:19:38.732 07:12:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:38.732 * Looking for test storage... 
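Condensed, the initiator_timeout run that finishes above is driven by a short list of RPCs against the target. The sketch below replays the calls visible in the log through scripts/rpc.py directly (the harness issues them via its rpc_cmd wrapper); latencies are in microseconds, so the delay bdev is pushed to 31 s reads / 310 s p99 writes while fio is writing and then dropped back to 30 us, and the test passes when fio still completes cleanly (the 'wait 80462' and fio_status=0 checks above).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # back the subsystem with a delay bdev whose latency can be changed at runtime
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # with fio running against the connected namespace, stall the bdev ...
  $rpc bdev_delay_update_latency Delay0 avg_read 31000000
  $rpc bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc bdev_delay_update_latency Delay0 p99_read 31000000
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3

  # ... then restore the original 30 us latencies and let the 60 s job finish
  for lat in avg_read avg_write p99_read p99_write; do
      $rpc bdev_delay_update_latency Delay0 "$lat" 30
  done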
00:19:38.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:38.732 07:12:21 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:38.732 07:12:21 -- nvmf/common.sh@7 -- # uname -s 00:19:38.732 07:12:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.732 07:12:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.732 07:12:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.732 07:12:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.732 07:12:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.732 07:12:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.732 07:12:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.732 07:12:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.732 07:12:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.732 07:12:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.732 07:12:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:38.732 07:12:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:38.732 07:12:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.732 07:12:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.732 07:12:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:38.732 07:12:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.732 07:12:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.732 07:12:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.732 07:12:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.732 07:12:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.732 07:12:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.732 07:12:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.732 07:12:21 -- 
paths/export.sh@5 -- # export PATH 00:19:38.732 07:12:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.732 07:12:21 -- nvmf/common.sh@46 -- # : 0 00:19:38.732 07:12:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:38.732 07:12:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:38.732 07:12:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:38.732 07:12:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.732 07:12:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.732 07:12:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:38.732 07:12:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:38.732 07:12:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:38.732 07:12:21 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.732 07:12:21 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.732 07:12:21 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:38.732 07:12:21 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:38.732 07:12:21 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.732 07:12:21 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:38.732 07:12:21 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:38.732 07:12:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:38.732 07:12:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.732 07:12:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:38.732 07:12:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:38.732 07:12:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:38.732 07:12:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.732 07:12:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.732 07:12:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.732 07:12:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:38.732 07:12:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:38.732 07:12:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:38.732 07:12:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:38.732 07:12:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:38.732 07:12:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:38.732 07:12:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.732 07:12:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.732 07:12:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:38.732 07:12:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:38.732 07:12:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:38.732 07:12:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:38.732 07:12:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:38.732 07:12:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.732 07:12:21 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:38.732 07:12:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:38.732 07:12:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:38.732 07:12:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:38.732 07:12:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:38.732 07:12:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:38.732 Cannot find device "nvmf_tgt_br" 00:19:38.732 07:12:21 -- nvmf/common.sh@154 -- # true 00:19:38.732 07:12:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.732 Cannot find device "nvmf_tgt_br2" 00:19:38.732 07:12:21 -- nvmf/common.sh@155 -- # true 00:19:38.732 07:12:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:38.732 07:12:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:38.732 Cannot find device "nvmf_tgt_br" 00:19:38.732 07:12:21 -- nvmf/common.sh@157 -- # true 00:19:38.732 07:12:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:38.732 Cannot find device "nvmf_tgt_br2" 00:19:38.732 07:12:21 -- nvmf/common.sh@158 -- # true 00:19:38.732 07:12:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:38.732 07:12:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:38.732 07:12:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.732 07:12:21 -- nvmf/common.sh@161 -- # true 00:19:38.732 07:12:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.732 07:12:21 -- nvmf/common.sh@162 -- # true 00:19:38.732 07:12:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:38.732 07:12:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:38.732 07:12:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:38.732 07:12:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:38.732 07:12:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:38.732 07:12:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:38.732 07:12:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:38.732 07:12:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:38.732 07:12:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:38.732 07:12:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:38.732 07:12:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:38.732 07:12:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:38.732 07:12:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:38.732 07:12:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:38.732 07:12:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:38.732 07:12:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:38.732 07:12:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:38.732 07:12:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:38.732 07:12:21 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:38.732 07:12:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:38.732 07:12:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:38.732 07:12:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:38.732 07:12:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:38.732 07:12:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:38.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:38.732 00:19:38.732 --- 10.0.0.2 ping statistics --- 00:19:38.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.732 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:38.732 07:12:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:38.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:38.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:19:38.732 00:19:38.732 --- 10.0.0.3 ping statistics --- 00:19:38.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.732 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:19:38.732 07:12:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:38.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:38.733 00:19:38.733 --- 10.0.0.1 ping statistics --- 00:19:38.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.733 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:38.733 07:12:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.733 07:12:21 -- nvmf/common.sh@421 -- # return 0 00:19:38.733 07:12:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:38.733 07:12:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.733 07:12:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:38.733 07:12:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:38.733 07:12:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.733 07:12:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:38.733 07:12:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:38.733 07:12:21 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:38.733 07:12:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:38.733 07:12:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:38.733 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:21 -- nvmf/common.sh@469 -- # nvmfpid=81309 00:19:38.733 07:12:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:38.733 07:12:21 -- nvmf/common.sh@470 -- # waitforlisten 81309 00:19:38.733 07:12:21 -- common/autotest_common.sh@819 -- # '[' -z 81309 ']' 00:19:38.733 07:12:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.733 07:12:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:38.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.733 07:12:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
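The multicontroller test reuses the same veth topology; the target has just been started above with -m 0xE (cores 1-3, pid 81309), and the harness now blocks until the app's RPC socket answers. Below is a minimal stand-in for that start-and-wait step, assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket; the real waitforlisten helper in autotest_common.sh is more thorough.

  # start nvmf_tgt inside the test namespace, flags as recorded in the log
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # poll the RPC socket until the application responds (or give up after ~10 s)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # abort if the target died during startup
      sleep 0.1
  done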
00:19:38.733 07:12:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:38.733 07:12:21 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 [2024-07-11 07:12:21.534481] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:38.733 [2024-07-11 07:12:21.534535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.733 [2024-07-11 07:12:21.669399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:38.733 [2024-07-11 07:12:21.762793] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:38.733 [2024-07-11 07:12:21.762979] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.733 [2024-07-11 07:12:21.762995] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.733 [2024-07-11 07:12:21.763006] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.733 [2024-07-11 07:12:21.763214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.733 [2024-07-11 07:12:21.763374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.733 [2024-07-11 07:12:21.763385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.733 07:12:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:38.733 07:12:22 -- common/autotest_common.sh@852 -- # return 0 00:19:38.733 07:12:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:38.733 07:12:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.733 07:12:22 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 [2024-07-11 07:12:22.505472] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 Malloc0 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 [2024-07-11 07:12:22.567750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 [2024-07-11 07:12:22.575661] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 Malloc1 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:38.733 07:12:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 07:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.733 07:12:22 -- host/multicontroller.sh@44 -- # bdevperf_pid=81361 00:19:38.733 07:12:22 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:38.733 07:12:22 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.733 07:12:22 -- host/multicontroller.sh@47 -- # waitforlisten 81361 /var/tmp/bdevperf.sock 00:19:38.733 07:12:22 -- common/autotest_common.sh@819 -- # '[' -z 81361 ']' 00:19:38.733 07:12:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.733 07:12:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:38.733 07:12:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.733 07:12:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:38.733 07:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:39.668 07:12:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:39.668 07:12:23 -- common/autotest_common.sh@852 -- # return 0 00:19:39.668 07:12:23 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:39.668 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.668 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.668 NVMe0n1 00:19:39.668 07:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.668 07:12:23 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:39.668 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.668 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.669 07:12:23 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:39.669 07:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.669 1 00:19:39.669 07:12:23 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:39.669 07:12:23 -- common/autotest_common.sh@640 -- # local es=0 00:19:39.669 07:12:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:39.669 07:12:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:39.669 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.669 2024/07/11 07:12:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:39.669 request: 00:19:39.669 { 00:19:39.669 "method": "bdev_nvme_attach_controller", 00:19:39.669 "params": { 00:19:39.669 "name": "NVMe0", 00:19:39.669 "trtype": "tcp", 00:19:39.669 "traddr": "10.0.0.2", 00:19:39.669 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:39.669 "hostaddr": "10.0.0.2", 00:19:39.669 "hostsvcid": "60000", 00:19:39.669 "adrfam": "ipv4", 00:19:39.669 "trsvcid": "4420", 00:19:39.669 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:39.669 } 00:19:39.669 } 00:19:39.669 Got JSON-RPC error response 
00:19:39.669 GoRPCClient: error on JSON-RPC call 00:19:39.669 07:12:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # es=1 00:19:39.669 07:12:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:39.669 07:12:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:39.669 07:12:23 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:39.669 07:12:23 -- common/autotest_common.sh@640 -- # local es=0 00:19:39.669 07:12:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:39.669 07:12:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:39.669 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.669 2024/07/11 07:12:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:39.669 request: 00:19:39.669 { 00:19:39.669 "method": "bdev_nvme_attach_controller", 00:19:39.669 "params": { 00:19:39.669 "name": "NVMe0", 00:19:39.669 "trtype": "tcp", 00:19:39.669 "traddr": "10.0.0.2", 00:19:39.669 "hostaddr": "10.0.0.2", 00:19:39.669 "hostsvcid": "60000", 00:19:39.669 "adrfam": "ipv4", 00:19:39.669 "trsvcid": "4420", 00:19:39.669 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:39.669 } 00:19:39.669 } 00:19:39.669 Got JSON-RPC error response 00:19:39.669 GoRPCClient: error on JSON-RPC call 00:19:39.669 07:12:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # es=1 00:19:39.669 07:12:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:39.669 07:12:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:39.669 07:12:23 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@640 -- # local es=0 00:19:39.669 07:12:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:39.669 07:12:23 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.669 2024/07/11 07:12:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:39.669 request: 00:19:39.669 { 00:19:39.669 "method": "bdev_nvme_attach_controller", 00:19:39.669 "params": { 00:19:39.669 "name": "NVMe0", 00:19:39.669 "trtype": "tcp", 00:19:39.669 "traddr": "10.0.0.2", 00:19:39.669 "hostaddr": "10.0.0.2", 00:19:39.669 "hostsvcid": "60000", 00:19:39.669 "adrfam": "ipv4", 00:19:39.669 "trsvcid": "4420", 00:19:39.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.669 "multipath": "disable" 00:19:39.669 } 00:19:39.669 } 00:19:39.669 Got JSON-RPC error response 00:19:39.669 GoRPCClient: error on JSON-RPC call 00:19:39.669 07:12:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # es=1 00:19:39.669 07:12:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:39.669 07:12:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:39.669 07:12:23 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:39.669 07:12:23 -- common/autotest_common.sh@640 -- # local es=0 00:19:39.669 07:12:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:39.669 07:12:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:39.669 07:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:39.669 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.669 2024/07/11 07:12:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:19:39.669 request: 00:19:39.669 { 00:19:39.669 "method": "bdev_nvme_attach_controller", 00:19:39.669 "params": { 00:19:39.669 "name": "NVMe0", 00:19:39.669 "trtype": "tcp", 00:19:39.669 "traddr": "10.0.0.2", 00:19:39.669 "hostaddr": "10.0.0.2", 00:19:39.669 "hostsvcid": "60000", 00:19:39.669 "adrfam": "ipv4", 00:19:39.669 "trsvcid": "4420", 00:19:39.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.669 "multipath": "failover" 00:19:39.669 } 00:19:39.669 } 00:19:39.669 Got JSON-RPC error response 00:19:39.669 GoRPCClient: error on JSON-RPC call 00:19:39.669 07:12:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@643 -- # es=1 00:19:39.669 07:12:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:39.669 07:12:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:39.669 07:12:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:39.669 07:12:23 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:39.669 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.669 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.927 00:19:39.927 07:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.927 07:12:23 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:39.927 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.927 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.927 07:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.927 07:12:23 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:39.927 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.927 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.927 00:19:39.927 07:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.927 07:12:23 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:39.927 07:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.927 07:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.927 07:12:23 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:39.927 07:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.927 07:12:23 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:39.927 07:12:23 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.299 0 00:19:41.299 07:12:25 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:41.299 07:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.299 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:41.299 07:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.299 07:12:25 -- host/multicontroller.sh@100 -- # killprocess 81361 00:19:41.299 07:12:25 -- common/autotest_common.sh@926 -- # '[' -z 81361 ']' 00:19:41.299 07:12:25 -- common/autotest_common.sh@930 -- # kill -0 81361 00:19:41.299 07:12:25 -- common/autotest_common.sh@931 -- # uname 00:19:41.299 07:12:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:19:41.299 07:12:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81361 00:19:41.299 killing process with pid 81361 00:19:41.299 07:12:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:41.299 07:12:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:41.299 07:12:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81361' 00:19:41.299 07:12:25 -- common/autotest_common.sh@945 -- # kill 81361 00:19:41.299 07:12:25 -- common/autotest_common.sh@950 -- # wait 81361 00:19:41.299 07:12:25 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.299 07:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.299 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:41.299 07:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.299 07:12:25 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:41.299 07:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.299 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:41.299 07:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.299 07:12:25 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:41.299 07:12:25 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:41.299 07:12:25 -- common/autotest_common.sh@1597 -- # read -r file 00:19:41.299 07:12:25 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:41.299 07:12:25 -- common/autotest_common.sh@1596 -- # sort -u 00:19:41.299 07:12:25 -- common/autotest_common.sh@1598 -- # cat 00:19:41.299 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:41.299 [2024-07-11 07:12:22.692000] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:41.299 [2024-07-11 07:12:22.692117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81361 ] 00:19:41.299 [2024-07-11 07:12:22.833967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.299 [2024-07-11 07:12:22.940490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.299 [2024-07-11 07:12:23.852210] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name d9dc75d3-137e-4ce2-90ef-6c6d2dcaf302 already exists 00:19:41.299 [2024-07-11 07:12:23.852255] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:d9dc75d3-137e-4ce2-90ef-6c6d2dcaf302 alias for bdev NVMe1n1 00:19:41.299 [2024-07-11 07:12:23.852290] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:41.299 Running I/O for 1 seconds... 
00:19:41.299 00:19:41.299 Latency(us) 00:19:41.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.299 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:41.300 NVMe0n1 : 1.00 23136.22 90.38 0.00 0.00 5525.39 2517.18 9711.24 00:19:41.300 =================================================================================================================== 00:19:41.300 Total : 23136.22 90.38 0.00 0.00 5525.39 2517.18 9711.24 00:19:41.300 Received shutdown signal, test time was about 1.000000 seconds 00:19:41.300 00:19:41.300 Latency(us) 00:19:41.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.300 =================================================================================================================== 00:19:41.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.300 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:41.300 07:12:25 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:41.300 07:12:25 -- common/autotest_common.sh@1597 -- # read -r file 00:19:41.300 07:12:25 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:41.300 07:12:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.300 07:12:25 -- nvmf/common.sh@116 -- # sync 00:19:41.558 07:12:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:41.558 07:12:25 -- nvmf/common.sh@119 -- # set +e 00:19:41.558 07:12:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.558 07:12:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:41.558 rmmod nvme_tcp 00:19:41.558 rmmod nvme_fabrics 00:19:41.558 rmmod nvme_keyring 00:19:41.558 07:12:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.558 07:12:25 -- nvmf/common.sh@123 -- # set -e 00:19:41.558 07:12:25 -- nvmf/common.sh@124 -- # return 0 00:19:41.558 07:12:25 -- nvmf/common.sh@477 -- # '[' -n 81309 ']' 00:19:41.558 07:12:25 -- nvmf/common.sh@478 -- # killprocess 81309 00:19:41.558 07:12:25 -- common/autotest_common.sh@926 -- # '[' -z 81309 ']' 00:19:41.558 07:12:25 -- common/autotest_common.sh@930 -- # kill -0 81309 00:19:41.558 07:12:25 -- common/autotest_common.sh@931 -- # uname 00:19:41.559 07:12:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:41.559 07:12:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81309 00:19:41.559 killing process with pid 81309 00:19:41.559 07:12:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:41.559 07:12:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:41.559 07:12:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81309' 00:19:41.559 07:12:25 -- common/autotest_common.sh@945 -- # kill 81309 00:19:41.559 07:12:25 -- common/autotest_common.sh@950 -- # wait 81309 00:19:41.817 07:12:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:41.817 07:12:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:41.817 07:12:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:41.817 07:12:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.817 07:12:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:41.817 07:12:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.817 07:12:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.817 07:12:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.817 07:12:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:41.817 
00:19:41.817 real 0m4.749s 00:19:41.817 user 0m14.733s 00:19:41.817 sys 0m1.056s 00:19:41.817 07:12:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.817 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:41.817 ************************************ 00:19:41.817 END TEST nvmf_multicontroller 00:19:41.817 ************************************ 00:19:41.817 07:12:25 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:41.817 07:12:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:41.817 07:12:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.817 07:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:41.817 ************************************ 00:19:41.817 START TEST nvmf_aer 00:19:41.817 ************************************ 00:19:41.817 07:12:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:42.076 * Looking for test storage... 00:19:42.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:42.076 07:12:25 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.076 07:12:25 -- nvmf/common.sh@7 -- # uname -s 00:19:42.076 07:12:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.076 07:12:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.076 07:12:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.076 07:12:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.076 07:12:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.076 07:12:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.076 07:12:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.076 07:12:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.076 07:12:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.076 07:12:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.076 07:12:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:42.076 07:12:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:42.076 07:12:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.076 07:12:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.076 07:12:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.076 07:12:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.076 07:12:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.076 07:12:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.076 07:12:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.076 07:12:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.076 07:12:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.076 07:12:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.076 07:12:25 -- paths/export.sh@5 -- # export PATH 00:19:42.076 07:12:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.076 07:12:25 -- nvmf/common.sh@46 -- # : 0 00:19:42.076 07:12:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.076 07:12:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.076 07:12:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.076 07:12:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.076 07:12:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.076 07:12:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.076 07:12:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.076 07:12:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.076 07:12:25 -- host/aer.sh@11 -- # nvmftestinit 00:19:42.076 07:12:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:42.076 07:12:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.076 07:12:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.076 07:12:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.076 07:12:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.076 07:12:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.076 07:12:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.076 07:12:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.076 07:12:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:42.076 07:12:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:42.076 07:12:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:42.076 07:12:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:42.076 07:12:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:42.076 07:12:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:42.076 07:12:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.076 07:12:25 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.076 07:12:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:42.076 07:12:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:42.076 07:12:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.076 07:12:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.076 07:12:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.076 07:12:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.076 07:12:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.076 07:12:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.076 07:12:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.076 07:12:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.076 07:12:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:42.076 07:12:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:42.076 Cannot find device "nvmf_tgt_br" 00:19:42.076 07:12:25 -- nvmf/common.sh@154 -- # true 00:19:42.076 07:12:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.076 Cannot find device "nvmf_tgt_br2" 00:19:42.076 07:12:25 -- nvmf/common.sh@155 -- # true 00:19:42.076 07:12:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:42.076 07:12:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:42.076 Cannot find device "nvmf_tgt_br" 00:19:42.076 07:12:25 -- nvmf/common.sh@157 -- # true 00:19:42.076 07:12:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:42.076 Cannot find device "nvmf_tgt_br2" 00:19:42.076 07:12:25 -- nvmf/common.sh@158 -- # true 00:19:42.076 07:12:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:42.076 07:12:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:42.076 07:12:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.077 07:12:26 -- nvmf/common.sh@161 -- # true 00:19:42.077 07:12:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.077 07:12:26 -- nvmf/common.sh@162 -- # true 00:19:42.077 07:12:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:42.077 07:12:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:42.077 07:12:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:42.077 07:12:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:42.077 07:12:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:42.077 07:12:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:42.077 07:12:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:42.077 07:12:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:42.077 07:12:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:42.077 07:12:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:42.077 07:12:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:42.077 07:12:26 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:42.077 07:12:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:42.077 07:12:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:42.077 07:12:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:42.335 07:12:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:42.335 07:12:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:42.335 07:12:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:42.335 07:12:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:42.335 07:12:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:42.335 07:12:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:42.335 07:12:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:42.335 07:12:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:42.335 07:12:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:42.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:19:42.335 00:19:42.335 --- 10.0.0.2 ping statistics --- 00:19:42.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.335 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:42.335 07:12:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:42.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:42.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:19:42.335 00:19:42.335 --- 10.0.0.3 ping statistics --- 00:19:42.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.335 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:42.335 07:12:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:42.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:42.335 00:19:42.335 --- 10.0.0.1 ping statistics --- 00:19:42.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.335 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:42.335 07:12:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.335 07:12:26 -- nvmf/common.sh@421 -- # return 0 00:19:42.335 07:12:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:42.335 07:12:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.335 07:12:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:42.335 07:12:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:42.335 07:12:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.335 07:12:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:42.335 07:12:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:42.335 07:12:26 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:42.335 07:12:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:42.335 07:12:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:42.335 07:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:42.335 07:12:26 -- nvmf/common.sh@469 -- # nvmfpid=81608 00:19:42.335 07:12:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:42.335 07:12:26 -- nvmf/common.sh@470 -- # waitforlisten 81608 00:19:42.335 07:12:26 -- common/autotest_common.sh@819 -- # '[' -z 81608 ']' 00:19:42.335 07:12:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.335 07:12:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:42.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.335 07:12:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.335 07:12:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:42.335 07:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:42.335 [2024-07-11 07:12:26.296509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:42.335 [2024-07-11 07:12:26.296598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.593 [2024-07-11 07:12:26.437634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.593 [2024-07-11 07:12:26.510530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:42.593 [2024-07-11 07:12:26.510874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.593 [2024-07-11 07:12:26.510895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.593 [2024-07-11 07:12:26.510907] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.593 [2024-07-11 07:12:26.511097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.593 [2024-07-11 07:12:26.511403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.593 [2024-07-11 07:12:26.511525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.593 [2024-07-11 07:12:26.511517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.158 07:12:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.158 07:12:27 -- common/autotest_common.sh@852 -- # return 0 00:19:43.158 07:12:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:43.158 07:12:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.158 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 07:12:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.417 07:12:27 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.417 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.417 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 [2024-07-11 07:12:27.255018] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.417 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.417 07:12:27 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:43.417 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.417 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 Malloc0 00:19:43.417 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.417 07:12:27 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:43.417 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.417 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.417 07:12:27 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.417 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.417 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.417 07:12:27 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.417 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.417 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 [2024-07-11 07:12:27.332571] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.417 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.417 07:12:27 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:43.417 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.417 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 [2024-07-11 07:12:27.340340] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:43.417 [ 00:19:43.417 { 00:19:43.417 "allow_any_host": true, 00:19:43.417 "hosts": [], 00:19:43.417 "listen_addresses": [], 00:19:43.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:43.417 "subtype": "Discovery" 00:19:43.417 }, 00:19:43.417 { 00:19:43.417 "allow_any_host": true, 00:19:43.417 "hosts": 
[], 00:19:43.417 "listen_addresses": [ 00:19:43.417 { 00:19:43.417 "adrfam": "IPv4", 00:19:43.417 "traddr": "10.0.0.2", 00:19:43.417 "transport": "TCP", 00:19:43.417 "trsvcid": "4420", 00:19:43.417 "trtype": "TCP" 00:19:43.417 } 00:19:43.417 ], 00:19:43.417 "max_cntlid": 65519, 00:19:43.417 "max_namespaces": 2, 00:19:43.417 "min_cntlid": 1, 00:19:43.417 "model_number": "SPDK bdev Controller", 00:19:43.417 "namespaces": [ 00:19:43.417 { 00:19:43.417 "bdev_name": "Malloc0", 00:19:43.417 "name": "Malloc0", 00:19:43.417 "nguid": "7DF5B97D795742A1BB88DE402497F5C6", 00:19:43.417 "nsid": 1, 00:19:43.417 "uuid": "7df5b97d-7957-42a1-bb88-de402497f5c6" 00:19:43.417 } 00:19:43.417 ], 00:19:43.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.417 "serial_number": "SPDK00000000000001", 00:19:43.417 "subtype": "NVMe" 00:19:43.417 } 00:19:43.417 ] 00:19:43.417 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.417 07:12:27 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:43.417 07:12:27 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:43.417 07:12:27 -- host/aer.sh@33 -- # aerpid=81661 00:19:43.417 07:12:27 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:43.417 07:12:27 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:43.417 07:12:27 -- common/autotest_common.sh@1244 -- # local i=0 00:19:43.417 07:12:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:43.417 07:12:27 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:19:43.417 07:12:27 -- common/autotest_common.sh@1247 -- # i=1 00:19:43.417 07:12:27 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:43.417 07:12:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:43.417 07:12:27 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:19:43.417 07:12:27 -- common/autotest_common.sh@1247 -- # i=2 00:19:43.417 07:12:27 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:43.676 07:12:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:43.676 07:12:27 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:43.676 07:12:27 -- common/autotest_common.sh@1255 -- # return 0 00:19:43.676 07:12:27 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:43.676 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.676 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.676 Malloc1 00:19:43.676 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.676 07:12:27 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:43.676 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.676 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.676 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.676 07:12:27 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:43.676 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.676 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.676 Asynchronous Event Request test 00:19:43.676 Attaching to 10.0.0.2 00:19:43.676 Attached to 10.0.0.2 00:19:43.676 Registering asynchronous event callbacks... 00:19:43.676 Starting namespace attribute notice tests for all controllers... 
00:19:43.676 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:43.676 aer_cb - Changed Namespace 00:19:43.676 Cleaning up... 00:19:43.676 [ 00:19:43.676 { 00:19:43.676 "allow_any_host": true, 00:19:43.676 "hosts": [], 00:19:43.676 "listen_addresses": [], 00:19:43.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:43.676 "subtype": "Discovery" 00:19:43.676 }, 00:19:43.676 { 00:19:43.676 "allow_any_host": true, 00:19:43.676 "hosts": [], 00:19:43.676 "listen_addresses": [ 00:19:43.676 { 00:19:43.676 "adrfam": "IPv4", 00:19:43.676 "traddr": "10.0.0.2", 00:19:43.676 "transport": "TCP", 00:19:43.676 "trsvcid": "4420", 00:19:43.676 "trtype": "TCP" 00:19:43.676 } 00:19:43.676 ], 00:19:43.676 "max_cntlid": 65519, 00:19:43.676 "max_namespaces": 2, 00:19:43.676 "min_cntlid": 1, 00:19:43.676 "model_number": "SPDK bdev Controller", 00:19:43.676 "namespaces": [ 00:19:43.676 { 00:19:43.676 "bdev_name": "Malloc0", 00:19:43.676 "name": "Malloc0", 00:19:43.676 "nguid": "7DF5B97D795742A1BB88DE402497F5C6", 00:19:43.676 "nsid": 1, 00:19:43.676 "uuid": "7df5b97d-7957-42a1-bb88-de402497f5c6" 00:19:43.676 }, 00:19:43.676 { 00:19:43.676 "bdev_name": "Malloc1", 00:19:43.676 "name": "Malloc1", 00:19:43.676 "nguid": "3B447C21DD9643B487771C7E1D5C8497", 00:19:43.676 "nsid": 2, 00:19:43.676 "uuid": "3b447c21-dd96-43b4-8777-1c7e1d5c8497" 00:19:43.676 } 00:19:43.676 ], 00:19:43.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.676 "serial_number": "SPDK00000000000001", 00:19:43.676 "subtype": "NVMe" 00:19:43.676 } 00:19:43.676 ] 00:19:43.676 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.676 07:12:27 -- host/aer.sh@43 -- # wait 81661 00:19:43.676 07:12:27 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:43.676 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.676 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.676 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.676 07:12:27 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:43.676 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.676 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.936 07:12:27 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:43.936 07:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.936 07:12:27 -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 07:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.936 07:12:27 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:43.936 07:12:27 -- host/aer.sh@51 -- # nvmftestfini 00:19:43.936 07:12:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:43.936 07:12:27 -- nvmf/common.sh@116 -- # sync 00:19:43.936 07:12:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:43.936 07:12:27 -- nvmf/common.sh@119 -- # set +e 00:19:43.936 07:12:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:43.936 07:12:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:43.936 rmmod nvme_tcp 00:19:43.936 rmmod nvme_fabrics 00:19:43.936 rmmod nvme_keyring 00:19:43.936 07:12:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:43.936 07:12:27 -- nvmf/common.sh@123 -- # set -e 00:19:43.936 07:12:27 -- nvmf/common.sh@124 -- # return 0 00:19:43.936 07:12:27 -- nvmf/common.sh@477 -- # '[' -n 81608 ']' 00:19:43.936 07:12:27 -- nvmf/common.sh@478 -- # killprocess 81608 00:19:43.936 07:12:27 -- 
common/autotest_common.sh@926 -- # '[' -z 81608 ']' 00:19:43.936 07:12:27 -- common/autotest_common.sh@930 -- # kill -0 81608 00:19:43.936 07:12:27 -- common/autotest_common.sh@931 -- # uname 00:19:43.936 07:12:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:43.936 07:12:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81608 00:19:43.936 07:12:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:43.936 07:12:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:43.936 killing process with pid 81608 00:19:43.936 07:12:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81608' 00:19:43.936 07:12:27 -- common/autotest_common.sh@945 -- # kill 81608 00:19:43.936 [2024-07-11 07:12:27.881560] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:43.936 07:12:27 -- common/autotest_common.sh@950 -- # wait 81608 00:19:44.195 07:12:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:44.195 07:12:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:44.195 07:12:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:44.195 07:12:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.195 07:12:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:44.195 07:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.195 07:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.195 07:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.195 07:12:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:44.195 00:19:44.195 real 0m2.326s 00:19:44.195 user 0m6.452s 00:19:44.195 sys 0m0.645s 00:19:44.195 07:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.195 ************************************ 00:19:44.195 END TEST nvmf_aer 00:19:44.195 ************************************ 00:19:44.195 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.195 07:12:28 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:44.195 07:12:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:44.195 07:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:44.195 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.195 ************************************ 00:19:44.195 START TEST nvmf_async_init 00:19:44.195 ************************************ 00:19:44.195 07:12:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:44.453 * Looking for test storage... 
00:19:44.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:44.453 07:12:28 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.453 07:12:28 -- nvmf/common.sh@7 -- # uname -s 00:19:44.453 07:12:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.453 07:12:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.453 07:12:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.453 07:12:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.453 07:12:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.453 07:12:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.453 07:12:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.453 07:12:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.453 07:12:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.453 07:12:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.453 07:12:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:44.453 07:12:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:44.453 07:12:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.453 07:12:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.453 07:12:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.453 07:12:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.453 07:12:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.453 07:12:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.453 07:12:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.453 07:12:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.454 07:12:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.454 07:12:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.454 07:12:28 -- 
paths/export.sh@5 -- # export PATH 00:19:44.454 07:12:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.454 07:12:28 -- nvmf/common.sh@46 -- # : 0 00:19:44.454 07:12:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:44.454 07:12:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:44.454 07:12:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:44.454 07:12:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.454 07:12:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.454 07:12:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:44.454 07:12:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:44.454 07:12:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:44.454 07:12:28 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:44.454 07:12:28 -- host/async_init.sh@14 -- # null_block_size=512 00:19:44.454 07:12:28 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:44.454 07:12:28 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:44.454 07:12:28 -- host/async_init.sh@20 -- # uuidgen 00:19:44.454 07:12:28 -- host/async_init.sh@20 -- # tr -d - 00:19:44.454 07:12:28 -- host/async_init.sh@20 -- # nguid=396e4bab846f492f825f7e40aa3b6ef7 00:19:44.454 07:12:28 -- host/async_init.sh@22 -- # nvmftestinit 00:19:44.454 07:12:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:44.454 07:12:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.454 07:12:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:44.454 07:12:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:44.454 07:12:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:44.454 07:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.454 07:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.454 07:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.454 07:12:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:44.454 07:12:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:44.454 07:12:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:44.454 07:12:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:44.454 07:12:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:44.454 07:12:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:44.454 07:12:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.454 07:12:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.454 07:12:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:44.454 07:12:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:44.454 07:12:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:44.454 07:12:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.454 07:12:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.454 07:12:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.454 07:12:28 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.454 07:12:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.454 07:12:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.454 07:12:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.454 07:12:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:44.454 07:12:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:44.454 Cannot find device "nvmf_tgt_br" 00:19:44.454 07:12:28 -- nvmf/common.sh@154 -- # true 00:19:44.454 07:12:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.454 Cannot find device "nvmf_tgt_br2" 00:19:44.454 07:12:28 -- nvmf/common.sh@155 -- # true 00:19:44.454 07:12:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:44.454 07:12:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:44.454 Cannot find device "nvmf_tgt_br" 00:19:44.454 07:12:28 -- nvmf/common.sh@157 -- # true 00:19:44.454 07:12:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:44.454 Cannot find device "nvmf_tgt_br2" 00:19:44.454 07:12:28 -- nvmf/common.sh@158 -- # true 00:19:44.454 07:12:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:44.454 07:12:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:44.454 07:12:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.454 07:12:28 -- nvmf/common.sh@161 -- # true 00:19:44.454 07:12:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.454 07:12:28 -- nvmf/common.sh@162 -- # true 00:19:44.454 07:12:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:44.454 07:12:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:44.454 07:12:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:44.454 07:12:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:44.454 07:12:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:44.454 07:12:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:44.712 07:12:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:44.712 07:12:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:44.712 07:12:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:44.713 07:12:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:44.713 07:12:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:44.713 07:12:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:44.713 07:12:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:44.713 07:12:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.713 07:12:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:44.713 07:12:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:44.713 07:12:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:44.713 07:12:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:44.713 07:12:28 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:44.713 07:12:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.713 07:12:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.713 07:12:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.713 07:12:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.713 07:12:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:44.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:19:44.713 00:19:44.713 --- 10.0.0.2 ping statistics --- 00:19:44.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.713 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:44.713 07:12:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:44.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:19:44.713 00:19:44.713 --- 10.0.0.3 ping statistics --- 00:19:44.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.713 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:44.713 07:12:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:44.713 00:19:44.713 --- 10.0.0.1 ping statistics --- 00:19:44.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.713 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:44.713 07:12:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.713 07:12:28 -- nvmf/common.sh@421 -- # return 0 00:19:44.713 07:12:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:44.713 07:12:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.713 07:12:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:44.713 07:12:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:44.713 07:12:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.713 07:12:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:44.713 07:12:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:44.713 07:12:28 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:44.713 07:12:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:44.713 07:12:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:44.713 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.713 07:12:28 -- nvmf/common.sh@469 -- # nvmfpid=81831 00:19:44.713 07:12:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:44.713 07:12:28 -- nvmf/common.sh@470 -- # waitforlisten 81831 00:19:44.713 07:12:28 -- common/autotest_common.sh@819 -- # '[' -z 81831 ']' 00:19:44.713 07:12:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.713 07:12:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:44.713 07:12:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
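The nvmf_veth_init sequence traced above reduces to a small set of iproute2/iptables commands. A condensed sketch for reference (same device names and 10.0.0.x addresses as this run; the retries and error handling in nvmf/common.sh are omitted):

# Namespace plus three veth pairs: one initiator-side, two target-side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target-side ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side peer interfaces together.
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Accept NVMe/TCP traffic on the initiator interface and hairpin traffic across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # sanity-check both target addresses, as the trace does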
00:19:44.713 07:12:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:44.713 07:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.713 [2024-07-11 07:12:28.737440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:44.713 [2024-07-11 07:12:28.737542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.971 [2024-07-11 07:12:28.876491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.971 [2024-07-11 07:12:28.953560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:44.971 [2024-07-11 07:12:28.953697] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.971 [2024-07-11 07:12:28.953709] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.971 [2024-07-11 07:12:28.953717] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.971 [2024-07-11 07:12:28.953746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.904 07:12:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:45.904 07:12:29 -- common/autotest_common.sh@852 -- # return 0 00:19:45.904 07:12:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:45.904 07:12:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 07:12:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.904 07:12:29 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:45.904 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 [2024-07-11 07:12:29.716567] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.904 07:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.904 07:12:29 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:45.904 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 null0 00:19:45.904 07:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.904 07:12:29 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:45.904 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 07:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.904 07:12:29 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:45.904 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 07:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.904 07:12:29 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 396e4bab846f492f825f7e40aa3b6ef7 00:19:45.904 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 07:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.904 07:12:29 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:45.904 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 [2024-07-11 07:12:29.756663] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.904 07:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.904 07:12:29 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:45.904 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.904 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:46.163 nvme0n1 00:19:46.163 07:12:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.163 07:12:29 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:46.163 07:12:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.163 07:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:46.163 [ 00:19:46.163 { 00:19:46.163 "aliases": [ 00:19:46.163 "396e4bab-846f-492f-825f-7e40aa3b6ef7" 00:19:46.163 ], 00:19:46.163 "assigned_rate_limits": { 00:19:46.163 "r_mbytes_per_sec": 0, 00:19:46.163 "rw_ios_per_sec": 0, 00:19:46.163 "rw_mbytes_per_sec": 0, 00:19:46.163 "w_mbytes_per_sec": 0 00:19:46.163 }, 00:19:46.163 "block_size": 512, 00:19:46.163 "claimed": false, 00:19:46.163 "driver_specific": { 00:19:46.163 "mp_policy": "active_passive", 00:19:46.163 "nvme": [ 00:19:46.163 { 00:19:46.163 "ctrlr_data": { 00:19:46.163 "ana_reporting": false, 00:19:46.163 "cntlid": 1, 00:19:46.163 "firmware_revision": "24.01.1", 00:19:46.163 "model_number": "SPDK bdev Controller", 00:19:46.163 "multi_ctrlr": true, 00:19:46.163 "oacs": { 00:19:46.163 "firmware": 0, 00:19:46.163 "format": 0, 00:19:46.163 "ns_manage": 0, 00:19:46.163 "security": 0 00:19:46.163 }, 00:19:46.163 "serial_number": "00000000000000000000", 00:19:46.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.163 "vendor_id": "0x8086" 00:19:46.163 }, 00:19:46.163 "ns_data": { 00:19:46.163 "can_share": true, 00:19:46.163 "id": 1 00:19:46.163 }, 00:19:46.163 "trid": { 00:19:46.163 "adrfam": "IPv4", 00:19:46.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.163 "traddr": "10.0.0.2", 00:19:46.163 "trsvcid": "4420", 00:19:46.163 "trtype": "TCP" 00:19:46.163 }, 00:19:46.163 "vs": { 00:19:46.163 "nvme_version": "1.3" 00:19:46.163 } 00:19:46.163 } 00:19:46.163 ] 00:19:46.163 }, 00:19:46.163 "name": "nvme0n1", 00:19:46.163 "num_blocks": 2097152, 00:19:46.163 "product_name": "NVMe disk", 00:19:46.163 "supported_io_types": { 00:19:46.163 "abort": true, 00:19:46.163 "compare": true, 00:19:46.163 "compare_and_write": true, 00:19:46.163 "flush": true, 00:19:46.163 "nvme_admin": true, 00:19:46.163 "nvme_io": true, 00:19:46.163 "read": true, 00:19:46.163 "reset": true, 00:19:46.163 "unmap": false, 00:19:46.163 "write": true, 00:19:46.163 "write_zeroes": true 00:19:46.163 }, 00:19:46.163 "uuid": "396e4bab-846f-492f-825f-7e40aa3b6ef7", 00:19:46.163 "zoned": false 00:19:46.163 } 00:19:46.163 ] 00:19:46.163 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.163 07:12:30 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:46.163 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.163 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.163 [2024-07-11 07:12:30.016632] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:46.163 [2024-07-11 07:12:30.016710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23607a0 (9): Bad file descriptor 00:19:46.163 [2024-07-11 07:12:30.148555] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:46.163 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.163 07:12:30 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:46.163 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.163 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.163 [ 00:19:46.163 { 00:19:46.163 "aliases": [ 00:19:46.163 "396e4bab-846f-492f-825f-7e40aa3b6ef7" 00:19:46.163 ], 00:19:46.163 "assigned_rate_limits": { 00:19:46.163 "r_mbytes_per_sec": 0, 00:19:46.163 "rw_ios_per_sec": 0, 00:19:46.163 "rw_mbytes_per_sec": 0, 00:19:46.163 "w_mbytes_per_sec": 0 00:19:46.163 }, 00:19:46.163 "block_size": 512, 00:19:46.163 "claimed": false, 00:19:46.163 "driver_specific": { 00:19:46.163 "mp_policy": "active_passive", 00:19:46.163 "nvme": [ 00:19:46.163 { 00:19:46.163 "ctrlr_data": { 00:19:46.163 "ana_reporting": false, 00:19:46.163 "cntlid": 2, 00:19:46.163 "firmware_revision": "24.01.1", 00:19:46.163 "model_number": "SPDK bdev Controller", 00:19:46.163 "multi_ctrlr": true, 00:19:46.163 "oacs": { 00:19:46.163 "firmware": 0, 00:19:46.163 "format": 0, 00:19:46.163 "ns_manage": 0, 00:19:46.163 "security": 0 00:19:46.163 }, 00:19:46.163 "serial_number": "00000000000000000000", 00:19:46.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.163 "vendor_id": "0x8086" 00:19:46.163 }, 00:19:46.163 "ns_data": { 00:19:46.163 "can_share": true, 00:19:46.163 "id": 1 00:19:46.163 }, 00:19:46.163 "trid": { 00:19:46.163 "adrfam": "IPv4", 00:19:46.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.163 "traddr": "10.0.0.2", 00:19:46.163 "trsvcid": "4420", 00:19:46.163 "trtype": "TCP" 00:19:46.163 }, 00:19:46.163 "vs": { 00:19:46.163 "nvme_version": "1.3" 00:19:46.163 } 00:19:46.163 } 00:19:46.163 ] 00:19:46.163 }, 00:19:46.163 "name": "nvme0n1", 00:19:46.163 "num_blocks": 2097152, 00:19:46.163 "product_name": "NVMe disk", 00:19:46.163 "supported_io_types": { 00:19:46.163 "abort": true, 00:19:46.163 "compare": true, 00:19:46.163 "compare_and_write": true, 00:19:46.163 "flush": true, 00:19:46.163 "nvme_admin": true, 00:19:46.163 "nvme_io": true, 00:19:46.163 "read": true, 00:19:46.163 "reset": true, 00:19:46.163 "unmap": false, 00:19:46.163 "write": true, 00:19:46.163 "write_zeroes": true 00:19:46.163 }, 00:19:46.163 "uuid": "396e4bab-846f-492f-825f-7e40aa3b6ef7", 00:19:46.163 "zoned": false 00:19:46.163 } 00:19:46.163 ] 00:19:46.163 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.163 07:12:30 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.163 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.163 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.163 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.163 07:12:30 -- host/async_init.sh@53 -- # mktemp 00:19:46.163 07:12:30 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UZ6gYLl1rw 00:19:46.163 07:12:30 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:46.163 07:12:30 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UZ6gYLl1rw 00:19:46.163 07:12:30 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:19:46.163 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.163 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.164 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.164 07:12:30 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:46.164 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.164 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.164 [2024-07-11 07:12:30.216792] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.164 [2024-07-11 07:12:30.216934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:46.422 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.422 07:12:30 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UZ6gYLl1rw 00:19:46.422 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.422 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.422 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.422 07:12:30 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UZ6gYLl1rw 00:19:46.422 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.422 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.422 [2024-07-11 07:12:30.232772] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.422 nvme0n1 00:19:46.422 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.422 07:12:30 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:46.422 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.422 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.422 [ 00:19:46.422 { 00:19:46.422 "aliases": [ 00:19:46.422 "396e4bab-846f-492f-825f-7e40aa3b6ef7" 00:19:46.422 ], 00:19:46.422 "assigned_rate_limits": { 00:19:46.422 "r_mbytes_per_sec": 0, 00:19:46.422 "rw_ios_per_sec": 0, 00:19:46.422 "rw_mbytes_per_sec": 0, 00:19:46.422 "w_mbytes_per_sec": 0 00:19:46.422 }, 00:19:46.422 "block_size": 512, 00:19:46.422 "claimed": false, 00:19:46.422 "driver_specific": { 00:19:46.422 "mp_policy": "active_passive", 00:19:46.422 "nvme": [ 00:19:46.422 { 00:19:46.422 "ctrlr_data": { 00:19:46.422 "ana_reporting": false, 00:19:46.422 "cntlid": 3, 00:19:46.422 "firmware_revision": "24.01.1", 00:19:46.422 "model_number": "SPDK bdev Controller", 00:19:46.422 "multi_ctrlr": true, 00:19:46.422 "oacs": { 00:19:46.422 "firmware": 0, 00:19:46.422 "format": 0, 00:19:46.422 "ns_manage": 0, 00:19:46.422 "security": 0 00:19:46.422 }, 00:19:46.422 "serial_number": "00000000000000000000", 00:19:46.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.422 "vendor_id": "0x8086" 00:19:46.422 }, 00:19:46.422 "ns_data": { 00:19:46.422 "can_share": true, 00:19:46.422 "id": 1 00:19:46.422 }, 00:19:46.422 "trid": { 00:19:46.422 "adrfam": "IPv4", 00:19:46.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.422 "traddr": "10.0.0.2", 00:19:46.422 "trsvcid": "4421", 00:19:46.422 "trtype": "TCP" 00:19:46.422 }, 00:19:46.422 "vs": { 00:19:46.422 "nvme_version": "1.3" 00:19:46.422 } 00:19:46.422 } 00:19:46.422 ] 00:19:46.422 }, 00:19:46.422 
"name": "nvme0n1", 00:19:46.422 "num_blocks": 2097152, 00:19:46.422 "product_name": "NVMe disk", 00:19:46.422 "supported_io_types": { 00:19:46.422 "abort": true, 00:19:46.422 "compare": true, 00:19:46.422 "compare_and_write": true, 00:19:46.422 "flush": true, 00:19:46.422 "nvme_admin": true, 00:19:46.422 "nvme_io": true, 00:19:46.423 "read": true, 00:19:46.423 "reset": true, 00:19:46.423 "unmap": false, 00:19:46.423 "write": true, 00:19:46.423 "write_zeroes": true 00:19:46.423 }, 00:19:46.423 "uuid": "396e4bab-846f-492f-825f-7e40aa3b6ef7", 00:19:46.423 "zoned": false 00:19:46.423 } 00:19:46.423 ] 00:19:46.423 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.423 07:12:30 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.423 07:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.423 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.423 07:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.423 07:12:30 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.UZ6gYLl1rw 00:19:46.423 07:12:30 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:46.423 07:12:30 -- host/async_init.sh@78 -- # nvmftestfini 00:19:46.423 07:12:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:46.423 07:12:30 -- nvmf/common.sh@116 -- # sync 00:19:46.423 07:12:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:46.423 07:12:30 -- nvmf/common.sh@119 -- # set +e 00:19:46.423 07:12:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:46.423 07:12:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:46.423 rmmod nvme_tcp 00:19:46.423 rmmod nvme_fabrics 00:19:46.423 rmmod nvme_keyring 00:19:46.423 07:12:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:46.423 07:12:30 -- nvmf/common.sh@123 -- # set -e 00:19:46.423 07:12:30 -- nvmf/common.sh@124 -- # return 0 00:19:46.423 07:12:30 -- nvmf/common.sh@477 -- # '[' -n 81831 ']' 00:19:46.423 07:12:30 -- nvmf/common.sh@478 -- # killprocess 81831 00:19:46.423 07:12:30 -- common/autotest_common.sh@926 -- # '[' -z 81831 ']' 00:19:46.423 07:12:30 -- common/autotest_common.sh@930 -- # kill -0 81831 00:19:46.423 07:12:30 -- common/autotest_common.sh@931 -- # uname 00:19:46.423 07:12:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:46.423 07:12:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81831 00:19:46.423 killing process with pid 81831 00:19:46.423 07:12:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:46.423 07:12:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:46.423 07:12:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81831' 00:19:46.423 07:12:30 -- common/autotest_common.sh@945 -- # kill 81831 00:19:46.423 07:12:30 -- common/autotest_common.sh@950 -- # wait 81831 00:19:46.681 07:12:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:46.681 07:12:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:46.681 07:12:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:46.681 07:12:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.681 07:12:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:46.681 07:12:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.681 07:12:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.681 07:12:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.681 07:12:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:46.681 
00:19:46.681 real 0m2.503s 00:19:46.681 user 0m2.275s 00:19:46.681 sys 0m0.595s 00:19:46.681 07:12:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.681 ************************************ 00:19:46.681 END TEST nvmf_async_init 00:19:46.681 ************************************ 00:19:46.681 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.939 07:12:30 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:46.939 07:12:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:46.939 07:12:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:46.939 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.939 ************************************ 00:19:46.939 START TEST dma 00:19:46.939 ************************************ 00:19:46.939 07:12:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:46.939 * Looking for test storage... 00:19:46.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:46.939 07:12:30 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.940 07:12:30 -- nvmf/common.sh@7 -- # uname -s 00:19:46.940 07:12:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.940 07:12:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.940 07:12:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.940 07:12:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.940 07:12:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.940 07:12:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.940 07:12:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.940 07:12:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.940 07:12:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.940 07:12:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.940 07:12:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:46.940 07:12:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:46.940 07:12:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.940 07:12:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.940 07:12:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.940 07:12:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.940 07:12:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.940 07:12:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.940 07:12:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.940 07:12:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.940 07:12:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.940 07:12:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.940 07:12:30 -- paths/export.sh@5 -- # export PATH 00:19:46.940 07:12:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.940 07:12:30 -- nvmf/common.sh@46 -- # : 0 00:19:46.940 07:12:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:46.940 07:12:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:46.940 07:12:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:46.940 07:12:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.940 07:12:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.940 07:12:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:46.940 07:12:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:46.940 07:12:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:46.940 07:12:30 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:46.940 07:12:30 -- host/dma.sh@13 -- # exit 0 00:19:46.940 00:19:46.940 real 0m0.097s 00:19:46.940 user 0m0.044s 00:19:46.940 sys 0m0.060s 00:19:46.940 07:12:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.940 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.940 ************************************ 00:19:46.940 END TEST dma 00:19:46.940 ************************************ 00:19:46.940 07:12:30 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:46.940 07:12:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:46.940 07:12:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:46.940 07:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:46.940 ************************************ 00:19:46.940 START TEST nvmf_identify 00:19:46.940 ************************************ 00:19:46.940 07:12:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:46.940 * Looking for test storage... 
00:19:46.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:46.940 07:12:30 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.940 07:12:30 -- nvmf/common.sh@7 -- # uname -s 00:19:46.940 07:12:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.940 07:12:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.940 07:12:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.940 07:12:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.940 07:12:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.940 07:12:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.940 07:12:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.940 07:12:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.940 07:12:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.940 07:12:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.198 07:12:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:47.198 07:12:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:47.198 07:12:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.198 07:12:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.198 07:12:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.198 07:12:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.198 07:12:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.198 07:12:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.198 07:12:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.198 07:12:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.198 07:12:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.198 07:12:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.198 07:12:31 -- paths/export.sh@5 
-- # export PATH 00:19:47.199 07:12:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.199 07:12:31 -- nvmf/common.sh@46 -- # : 0 00:19:47.199 07:12:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.199 07:12:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.199 07:12:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.199 07:12:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.199 07:12:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.199 07:12:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.199 07:12:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.199 07:12:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.199 07:12:31 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.199 07:12:31 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.199 07:12:31 -- host/identify.sh@14 -- # nvmftestinit 00:19:47.199 07:12:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.199 07:12:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.199 07:12:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.199 07:12:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.199 07:12:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.199 07:12:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.199 07:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.199 07:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.199 07:12:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:47.199 07:12:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:47.199 07:12:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:47.199 07:12:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:47.199 07:12:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:47.199 07:12:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:47.199 07:12:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.199 07:12:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.199 07:12:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.199 07:12:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:47.199 07:12:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.199 07:12:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.199 07:12:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.199 07:12:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.199 07:12:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.199 07:12:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.199 07:12:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.199 07:12:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.199 07:12:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:47.199 07:12:31 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:47.199 Cannot find device "nvmf_tgt_br" 00:19:47.199 07:12:31 -- nvmf/common.sh@154 -- # true 00:19:47.199 07:12:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.199 Cannot find device "nvmf_tgt_br2" 00:19:47.199 07:12:31 -- nvmf/common.sh@155 -- # true 00:19:47.199 07:12:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:47.199 07:12:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:47.199 Cannot find device "nvmf_tgt_br" 00:19:47.199 07:12:31 -- nvmf/common.sh@157 -- # true 00:19:47.199 07:12:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:47.199 Cannot find device "nvmf_tgt_br2" 00:19:47.199 07:12:31 -- nvmf/common.sh@158 -- # true 00:19:47.199 07:12:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:47.199 07:12:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:47.199 07:12:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.199 07:12:31 -- nvmf/common.sh@161 -- # true 00:19:47.199 07:12:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.199 07:12:31 -- nvmf/common.sh@162 -- # true 00:19:47.199 07:12:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.199 07:12:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.199 07:12:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.199 07:12:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.199 07:12:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.199 07:12:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.199 07:12:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.199 07:12:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:47.199 07:12:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:47.199 07:12:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:47.199 07:12:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:47.199 07:12:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:47.199 07:12:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:47.199 07:12:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.199 07:12:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.458 07:12:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.458 07:12:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:47.458 07:12:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:47.458 07:12:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.458 07:12:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.458 07:12:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.458 07:12:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.458 07:12:31 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.458 07:12:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:47.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:47.458 00:19:47.458 --- 10.0.0.2 ping statistics --- 00:19:47.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.458 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:47.458 07:12:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:47.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:19:47.458 00:19:47.458 --- 10.0.0.3 ping statistics --- 00:19:47.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.458 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:47.458 07:12:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:47.458 00:19:47.458 --- 10.0.0.1 ping statistics --- 00:19:47.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.458 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:47.458 07:12:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.458 07:12:31 -- nvmf/common.sh@421 -- # return 0 00:19:47.458 07:12:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:47.458 07:12:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.458 07:12:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:47.458 07:12:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:47.458 07:12:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.458 07:12:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:47.458 07:12:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:47.458 07:12:31 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:47.458 07:12:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:47.458 07:12:31 -- common/autotest_common.sh@10 -- # set +x 00:19:47.458 07:12:31 -- host/identify.sh@19 -- # nvmfpid=82096 00:19:47.458 07:12:31 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:47.458 07:12:31 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.458 07:12:31 -- host/identify.sh@23 -- # waitforlisten 82096 00:19:47.458 07:12:31 -- common/autotest_common.sh@819 -- # '[' -z 82096 ']' 00:19:47.458 07:12:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.458 07:12:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:47.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.458 07:12:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.458 07:12:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:47.458 07:12:31 -- common/autotest_common.sh@10 -- # set +x 00:19:47.458 [2024-07-11 07:12:31.422979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
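As in the previous test, the target is launched inside the namespace and the script blocks until the RPC socket answers. A minimal stand-in for the nvmfappstart/waitforlisten helpers used above (the backgrounding and polling loop below are an illustration of what the helpers accomplish, not their actual implementation):

# Start nvmf_tgt in the target namespace (mask 0xF = 4 cores, all tracepoint groups enabled).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the UNIX-domain RPC socket accepts requests, then drive the test over rpc.py.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done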
00:19:47.458 [2024-07-11 07:12:31.423067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.717 [2024-07-11 07:12:31.563625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:47.717 [2024-07-11 07:12:31.653232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:47.717 [2024-07-11 07:12:31.653359] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.717 [2024-07-11 07:12:31.653371] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.717 [2024-07-11 07:12:31.653381] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.717 [2024-07-11 07:12:31.654043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.717 [2024-07-11 07:12:31.654213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.717 [2024-07-11 07:12:31.654598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.717 [2024-07-11 07:12:31.654621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.652 07:12:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:48.652 07:12:32 -- common/autotest_common.sh@852 -- # return 0 00:19:48.652 07:12:32 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.652 07:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 [2024-07-11 07:12:32.411308] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.652 07:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.652 07:12:32 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:48.652 07:12:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 07:12:32 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:48.652 07:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 Malloc0 00:19:48.652 07:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.652 07:12:32 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.652 07:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 07:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.652 07:12:32 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:48.652 07:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 07:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.652 07:12:32 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.652 07:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 [2024-07-11 07:12:32.524359] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.652 07:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.652 07:12:32 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:48.652 07:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 07:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.652 07:12:32 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:48.652 07:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.652 07:12:32 -- common/autotest_common.sh@10 -- # set +x 00:19:48.652 [2024-07-11 07:12:32.540143] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:48.652 [ 00:19:48.652 { 00:19:48.652 "allow_any_host": true, 00:19:48.652 "hosts": [], 00:19:48.652 "listen_addresses": [ 00:19:48.652 { 00:19:48.652 "adrfam": "IPv4", 00:19:48.652 "traddr": "10.0.0.2", 00:19:48.652 "transport": "TCP", 00:19:48.652 "trsvcid": "4420", 00:19:48.652 "trtype": "TCP" 00:19:48.652 } 00:19:48.652 ], 00:19:48.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:48.652 "subtype": "Discovery" 00:19:48.652 }, 00:19:48.652 { 00:19:48.652 "allow_any_host": true, 00:19:48.652 "hosts": [], 00:19:48.652 "listen_addresses": [ 00:19:48.652 { 00:19:48.652 "adrfam": "IPv4", 00:19:48.652 "traddr": "10.0.0.2", 00:19:48.652 "transport": "TCP", 00:19:48.652 "trsvcid": "4420", 00:19:48.652 "trtype": "TCP" 00:19:48.652 } 00:19:48.652 ], 00:19:48.652 "max_cntlid": 65519, 00:19:48.652 "max_namespaces": 32, 00:19:48.652 "min_cntlid": 1, 00:19:48.652 "model_number": "SPDK bdev Controller", 00:19:48.652 "namespaces": [ 00:19:48.652 { 00:19:48.652 "bdev_name": "Malloc0", 00:19:48.652 "eui64": "ABCDEF0123456789", 00:19:48.652 "name": "Malloc0", 00:19:48.652 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:48.652 "nsid": 1, 00:19:48.652 "uuid": "17302d9a-7e71-49a8-847c-f531207251e6" 00:19:48.652 } 00:19:48.652 ], 00:19:48.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.652 "serial_number": "SPDK00000000000001", 00:19:48.652 "subtype": "NVMe" 00:19:48.652 } 00:19:48.652 ] 00:19:48.652 07:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.652 07:12:32 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:48.652 [2024-07-11 07:12:32.585940] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
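The identify setup above boils down to creating a RAM-backed namespace, exposing it and the discovery subsystem on 10.0.0.2:4420, and pointing spdk_nvme_identify at the discovery NQN. A condensed sketch of the traced commands (rpc.py stands in for the test's rpc_cmd wrapper):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems
# Enumerate the discovery log and identify every reported controller and namespace.
build/bin/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

If nvme-cli is available, the same listener can be cross-checked from the kernel initiator with nvme discover -t tcp -a 10.0.0.2 -s 4420.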
00:19:48.652 [2024-07-11 07:12:32.586002] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82149 ] 00:19:48.915 [2024-07-11 07:12:32.725509] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:48.915 [2024-07-11 07:12:32.725584] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:48.915 [2024-07-11 07:12:32.725602] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:48.915 [2024-07-11 07:12:32.725613] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:48.915 [2024-07-11 07:12:32.725622] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:48.915 [2024-07-11 07:12:32.725736] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:48.915 [2024-07-11 07:12:32.725802] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x109e270 0 00:19:48.915 [2024-07-11 07:12:32.731512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:48.915 [2024-07-11 07:12:32.731535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:48.915 [2024-07-11 07:12:32.731558] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:48.915 [2024-07-11 07:12:32.731561] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:48.915 [2024-07-11 07:12:32.731604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.915 [2024-07-11 07:12:32.731611] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.915 [2024-07-11 07:12:32.731615] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.915 [2024-07-11 07:12:32.731626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:48.915 [2024-07-11 07:12:32.731655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.915 [2024-07-11 07:12:32.739520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.915 [2024-07-11 07:12:32.739541] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.739563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739568] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.739579] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:48.916 [2024-07-11 07:12:32.739587] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:48.916 [2024-07-11 07:12:32.739592] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:48.916 [2024-07-11 07:12:32.739608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739613] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 
07:12:32.739617] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.739640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.916 [2024-07-11 07:12:32.739668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.739745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.739751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.739755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.739765] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:48.916 [2024-07-11 07:12:32.739772] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:48.916 [2024-07-11 07:12:32.739779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.739824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.916 [2024-07-11 07:12:32.739849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.739922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.739928] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.739931] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739935] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.739941] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:48.916 [2024-07-11 07:12:32.739949] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:48.916 [2024-07-11 07:12:32.739957] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739960] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.739964] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.739971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.916 [2024-07-11 07:12:32.739988] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.740058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.740064] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.740068] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740071] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.740078] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:48.916 [2024-07-11 07:12:32.740087] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.740101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.916 [2024-07-11 07:12:32.740118] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.740183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.740190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.740194] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.740203] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:48.916 [2024-07-11 07:12:32.740222] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:48.916 [2024-07-11 07:12:32.740245] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:48.916 [2024-07-11 07:12:32.740350] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:48.916 [2024-07-11 07:12:32.740355] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:48.916 [2024-07-11 07:12:32.740363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.740377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.916 [2024-07-11 07:12:32.740395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.740460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.740467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.740470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:48.916 [2024-07-11 07:12:32.740474] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.740480] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:48.916 [2024-07-11 07:12:32.740489] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.740503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.916 [2024-07-11 07:12:32.740520] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.740601] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.740608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.740612] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740615] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.740621] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:48.916 [2024-07-11 07:12:32.740626] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:48.916 [2024-07-11 07:12:32.740648] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:48.916 [2024-07-11 07:12:32.740670] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:48.916 [2024-07-11 07:12:32.740680] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740688] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.740695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.916 [2024-07-11 07:12:32.740715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.740840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.916 [2024-07-11 07:12:32.740846] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.916 [2024-07-11 07:12:32.740850] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740854] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109e270): datao=0, datal=4096, cccid=0 00:19:48.916 [2024-07-11 07:12:32.740858] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10dd6d0) on tqpair(0x109e270): expected_datao=0, 
payload_size=4096 00:19:48.916 [2024-07-11 07:12:32.740867] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740871] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.740884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.740888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.916 [2024-07-11 07:12:32.740900] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:48.916 [2024-07-11 07:12:32.740905] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:48.916 [2024-07-11 07:12:32.740909] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:48.916 [2024-07-11 07:12:32.740914] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:48.916 [2024-07-11 07:12:32.740919] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:48.916 [2024-07-11 07:12:32.740924] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:48.916 [2024-07-11 07:12:32.740936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:48.916 [2024-07-11 07:12:32.740944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.916 [2024-07-11 07:12:32.740951] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.916 [2024-07-11 07:12:32.740958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:48.916 [2024-07-11 07:12:32.740977] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.916 [2024-07-11 07:12:32.741056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.916 [2024-07-11 07:12:32.741062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.916 [2024-07-11 07:12:32.741066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741069] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dd6d0) on tqpair=0x109e270 00:19:48.917 [2024-07-11 07:12:32.741078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.917 [2024-07-11 
07:12:32.741097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741103] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.917 [2024-07-11 07:12:32.741114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741118] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.917 [2024-07-11 07:12:32.741132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.917 [2024-07-11 07:12:32.741148] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:48.917 [2024-07-11 07:12:32.741160] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:48.917 [2024-07-11 07:12:32.741167] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741171] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741174] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.917 [2024-07-11 07:12:32.741200] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd6d0, cid 0, qid 0 00:19:48.917 [2024-07-11 07:12:32.741206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd830, cid 1, qid 0 00:19:48.917 [2024-07-11 07:12:32.741211] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dd990, cid 2, qid 0 00:19:48.917 [2024-07-11 07:12:32.741215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.917 [2024-07-11 07:12:32.741219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddc50, cid 4, qid 0 00:19:48.917 [2024-07-11 07:12:32.741327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.917 [2024-07-11 07:12:32.741333] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.917 [2024-07-11 07:12:32.741336] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x10ddc50) on tqpair=0x109e270 00:19:48.917 [2024-07-11 07:12:32.741346] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:48.917 [2024-07-11 07:12:32.741352] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:48.917 [2024-07-11 07:12:32.741363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741368] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.917 [2024-07-11 07:12:32.741395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddc50, cid 4, qid 0 00:19:48.917 [2024-07-11 07:12:32.741461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.917 [2024-07-11 07:12:32.741468] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.917 [2024-07-11 07:12:32.741471] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741488] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109e270): datao=0, datal=4096, cccid=4 00:19:48.917 [2024-07-11 07:12:32.741507] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ddc50) on tqpair(0x109e270): expected_datao=0, payload_size=4096 00:19:48.917 [2024-07-11 07:12:32.741530] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741534] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.917 [2024-07-11 07:12:32.741547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.917 [2024-07-11 07:12:32.741550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddc50) on tqpair=0x109e270 00:19:48.917 [2024-07-11 07:12:32.741568] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:48.917 [2024-07-11 07:12:32.741591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.917 [2024-07-11 07:12:32.741614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741617] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.741626] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.917 [2024-07-11 07:12:32.741651] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddc50, cid 4, qid 0 00:19:48.917 [2024-07-11 07:12:32.741659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10dddb0, cid 5, qid 0 00:19:48.917 [2024-07-11 07:12:32.741789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.917 [2024-07-11 07:12:32.741796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.917 [2024-07-11 07:12:32.741799] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741803] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109e270): datao=0, datal=1024, cccid=4 00:19:48.917 [2024-07-11 07:12:32.741807] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ddc50) on tqpair(0x109e270): expected_datao=0, payload_size=1024 00:19:48.917 [2024-07-11 07:12:32.741814] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741818] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.917 [2024-07-11 07:12:32.741828] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.917 [2024-07-11 07:12:32.741832] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.741835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10dddb0) on tqpair=0x109e270 00:19:48.917 [2024-07-11 07:12:32.787505] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.917 [2024-07-11 07:12:32.787523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.917 [2024-07-11 07:12:32.787544] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787548] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddc50) on tqpair=0x109e270 00:19:48.917 [2024-07-11 07:12:32.787563] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.787584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.917 [2024-07-11 07:12:32.787614] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddc50, cid 4, qid 0 00:19:48.917 [2024-07-11 07:12:32.787699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.917 [2024-07-11 07:12:32.787705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.917 [2024-07-11 07:12:32.787709] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787712] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109e270): datao=0, datal=3072, cccid=4 00:19:48.917 [2024-07-11 07:12:32.787716] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ddc50) on tqpair(0x109e270): expected_datao=0, payload_size=3072 00:19:48.917 [2024-07-11 
07:12:32.787723] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787727] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.917 [2024-07-11 07:12:32.787739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.917 [2024-07-11 07:12:32.787742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddc50) on tqpair=0x109e270 00:19:48.917 [2024-07-11 07:12:32.787756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787760] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109e270) 00:19:48.917 [2024-07-11 07:12:32.787769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.917 [2024-07-11 07:12:32.787824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddc50, cid 4, qid 0 00:19:48.917 [2024-07-11 07:12:32.787900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.917 [2024-07-11 07:12:32.787907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.917 [2024-07-11 07:12:32.787910] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787914] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109e270): datao=0, datal=8, cccid=4 00:19:48.917 [2024-07-11 07:12:32.787918] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ddc50) on tqpair(0x109e270): expected_datao=0, payload_size=8 00:19:48.917 [2024-07-11 07:12:32.787924] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.917 [2024-07-11 07:12:32.787928] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.917 ===================================================== 00:19:48.917 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:48.917 ===================================================== 00:19:48.917 Controller Capabilities/Features 00:19:48.917 ================================ 00:19:48.917 Vendor ID: 0000 00:19:48.917 Subsystem Vendor ID: 0000 00:19:48.917 Serial Number: .................... 00:19:48.917 Model Number: ........................................ 
00:19:48.917 Firmware Version: 24.01.1 00:19:48.917 Recommended Arb Burst: 0 00:19:48.917 IEEE OUI Identifier: 00 00 00 00:19:48.917 Multi-path I/O 00:19:48.918 May have multiple subsystem ports: No 00:19:48.918 May have multiple controllers: No 00:19:48.918 Associated with SR-IOV VF: No 00:19:48.918 Max Data Transfer Size: 131072 00:19:48.918 Max Number of Namespaces: 0 00:19:48.918 Max Number of I/O Queues: 1024 00:19:48.918 NVMe Specification Version (VS): 1.3 00:19:48.918 NVMe Specification Version (Identify): 1.3 00:19:48.918 Maximum Queue Entries: 128 00:19:48.918 Contiguous Queues Required: Yes 00:19:48.918 Arbitration Mechanisms Supported 00:19:48.918 Weighted Round Robin: Not Supported 00:19:48.918 Vendor Specific: Not Supported 00:19:48.918 Reset Timeout: 15000 ms 00:19:48.918 Doorbell Stride: 4 bytes 00:19:48.918 NVM Subsystem Reset: Not Supported 00:19:48.918 Command Sets Supported 00:19:48.918 NVM Command Set: Supported 00:19:48.918 Boot Partition: Not Supported 00:19:48.918 Memory Page Size Minimum: 4096 bytes 00:19:48.918 Memory Page Size Maximum: 4096 bytes 00:19:48.918 Persistent Memory Region: Not Supported 00:19:48.918 Optional Asynchronous Events Supported 00:19:48.918 Namespace Attribute Notices: Not Supported 00:19:48.918 Firmware Activation Notices: Not Supported 00:19:48.918 ANA Change Notices: Not Supported 00:19:48.918 PLE Aggregate Log Change Notices: Not Supported 00:19:48.918 LBA Status Info Alert Notices: Not Supported 00:19:48.918 EGE Aggregate Log Change Notices: Not Supported 00:19:48.918 Normal NVM Subsystem Shutdown event: Not Supported 00:19:48.918 Zone Descriptor Change Notices: Not Supported 00:19:48.918 Discovery Log Change Notices: Supported 00:19:48.918 Controller Attributes 00:19:48.918 128-bit Host Identifier: Not Supported 00:19:48.918 Non-Operational Permissive Mode: Not Supported 00:19:48.918 NVM Sets: Not Supported 00:19:48.918 Read Recovery Levels: Not Supported 00:19:48.918 Endurance Groups: Not Supported 00:19:48.918 Predictable Latency Mode: Not Supported 00:19:48.918 Traffic Based Keep ALive: Not Supported 00:19:48.918 Namespace Granularity: Not Supported 00:19:48.918 SQ Associations: Not Supported 00:19:48.918 UUID List: Not Supported 00:19:48.918 Multi-Domain Subsystem: Not Supported 00:19:48.918 Fixed Capacity Management: Not Supported 00:19:48.918 Variable Capacity Management: Not Supported 00:19:48.918 Delete Endurance Group: Not Supported 00:19:48.918 Delete NVM Set: Not Supported 00:19:48.918 Extended LBA Formats Supported: Not Supported 00:19:48.918 Flexible Data Placement Supported: Not Supported 00:19:48.918 00:19:48.918 Controller Memory Buffer Support 00:19:48.918 ================================ 00:19:48.918 Supported: No 00:19:48.918 00:19:48.918 Persistent Memory Region Support 00:19:48.918 ================================ 00:19:48.918 Supported: No 00:19:48.918 00:19:48.918 Admin Command Set Attributes 00:19:48.918 ============================ 00:19:48.918 Security Send/Receive: Not Supported 00:19:48.918 Format NVM: Not Supported 00:19:48.918 Firmware Activate/Download: Not Supported 00:19:48.918 Namespace Management: Not Supported 00:19:48.918 Device Self-Test: Not Supported 00:19:48.918 Directives: Not Supported 00:19:48.918 NVMe-MI: Not Supported 00:19:48.918 Virtualization Management: Not Supported 00:19:48.918 Doorbell Buffer Config: Not Supported 00:19:48.918 Get LBA Status Capability: Not Supported 00:19:48.918 Command & Feature Lockdown Capability: Not Supported 00:19:48.918 Abort Command Limit: 1 00:19:48.918 
Async Event Request Limit: 4 00:19:48.918 Number of Firmware Slots: N/A 00:19:48.918 Firmware Slot 1 Read-Only: N/A 00:19:48.918 [2024-07-11 07:12:32.829593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.918 [2024-07-11 07:12:32.829614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.918 [2024-07-11 07:12:32.829636] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.918 [2024-07-11 07:12:32.829640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddc50) on tqpair=0x109e270 00:19:48.918 Firmware Activation Without Reset: N/A 00:19:48.918 Multiple Update Detection Support: N/A 00:19:48.918 Firmware Update Granularity: No Information Provided 00:19:48.918 Per-Namespace SMART Log: No 00:19:48.918 Asymmetric Namespace Access Log Page: Not Supported 00:19:48.918 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:48.918 Command Effects Log Page: Not Supported 00:19:48.918 Get Log Page Extended Data: Supported 00:19:48.918 Telemetry Log Pages: Not Supported 00:19:48.918 Persistent Event Log Pages: Not Supported 00:19:48.918 Supported Log Pages Log Page: May Support 00:19:48.918 Commands Supported & Effects Log Page: Not Supported 00:19:48.918 Feature Identifiers & Effects Log Page:May Support 00:19:48.918 NVMe-MI Commands & Effects Log Page: May Support 00:19:48.918 Data Area 4 for Telemetry Log: Not Supported 00:19:48.918 Error Log Page Entries Supported: 128 00:19:48.918 Keep Alive: Not Supported 00:19:48.918 00:19:48.918 NVM Command Set Attributes 00:19:48.918 ========================== 00:19:48.918 Submission Queue Entry Size 00:19:48.918 Max: 1 00:19:48.918 Min: 1 00:19:48.918 Completion Queue Entry Size 00:19:48.918 Max: 1 00:19:48.918 Min: 1 00:19:48.918 Number of Namespaces: 0 00:19:48.918 Compare Command: Not Supported 00:19:48.918 Write Uncorrectable Command: Not Supported 00:19:48.918 Dataset Management Command: Not Supported 00:19:48.918 Write Zeroes Command: Not Supported 00:19:48.918 Set Features Save Field: Not Supported 00:19:48.918 Reservations: Not Supported 00:19:48.918 Timestamp: Not Supported 00:19:48.918 Copy: Not Supported 00:19:48.918 Volatile Write Cache: Not Present 00:19:48.918 Atomic Write Unit (Normal): 1 00:19:48.918 Atomic Write Unit (PFail): 1 00:19:48.918 Atomic Compare & Write Unit: 1 00:19:48.918 Fused Compare & Write: Supported 00:19:48.918 Scatter-Gather List 00:19:48.918 SGL Command Set: Supported 00:19:48.918 SGL Keyed: Supported 00:19:48.918 SGL Bit Bucket Descriptor: Not Supported 00:19:48.918 SGL Metadata Pointer: Not Supported 00:19:48.918 Oversized SGL: Not Supported 00:19:48.918 SGL Metadata Address: Not Supported 00:19:48.918 SGL Offset: Supported 00:19:48.918 Transport SGL Data Block: Not Supported 00:19:48.918 Replay Protected Memory Block: Not Supported 00:19:48.918 00:19:48.918 Firmware Slot Information 00:19:48.918 ========================= 00:19:48.918 Active slot: 0 00:19:48.918 00:19:48.918 00:19:48.918 Error Log 00:19:48.918 ========= 00:19:48.918 00:19:48.918 Active Namespaces 00:19:48.918 ================= 00:19:48.918 Discovery Log Page 00:19:48.918 ================== 00:19:48.918 Generation Counter: 2 00:19:48.918 Number of Records: 2 00:19:48.918 Record Format: 0 00:19:48.918 00:19:48.918 Discovery Log Entry 0 00:19:48.918 ---------------------- 00:19:48.918 Transport Type: 3 (TCP) 00:19:48.918 Address Family: 1 (IPv4) 00:19:48.918 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:48.918 Entry Flags: 00:19:48.918 Duplicate
Returned Information: 1 00:19:48.918 Explicit Persistent Connection Support for Discovery: 1 00:19:48.918 Transport Requirements: 00:19:48.918 Secure Channel: Not Required 00:19:48.918 Port ID: 0 (0x0000) 00:19:48.918 Controller ID: 65535 (0xffff) 00:19:48.918 Admin Max SQ Size: 128 00:19:48.918 Transport Service Identifier: 4420 00:19:48.918 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:48.918 Transport Address: 10.0.0.2 00:19:48.918 Discovery Log Entry 1 00:19:48.918 ---------------------- 00:19:48.918 Transport Type: 3 (TCP) 00:19:48.918 Address Family: 1 (IPv4) 00:19:48.918 Subsystem Type: 2 (NVM Subsystem) 00:19:48.918 Entry Flags: 00:19:48.918 Duplicate Returned Information: 0 00:19:48.918 Explicit Persistent Connection Support for Discovery: 0 00:19:48.918 Transport Requirements: 00:19:48.918 Secure Channel: Not Required 00:19:48.918 Port ID: 0 (0x0000) 00:19:48.918 Controller ID: 65535 (0xffff) 00:19:48.918 Admin Max SQ Size: 128 00:19:48.918 Transport Service Identifier: 4420 00:19:48.918 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:48.918 Transport Address: 10.0.0.2 [2024-07-11 07:12:32.829735] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:48.918 [2024-07-11 07:12:32.829751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.918 [2024-07-11 07:12:32.829758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.918 [2024-07-11 07:12:32.829764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.918 [2024-07-11 07:12:32.829769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.918 [2024-07-11 07:12:32.829779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.918 [2024-07-11 07:12:32.829783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.918 [2024-07-11 07:12:32.829786] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.918 [2024-07-11 07:12:32.829794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.918 [2024-07-11 07:12:32.829820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.918 [2024-07-11 07:12:32.829895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.918 [2024-07-11 07:12:32.829901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.918 [2024-07-11 07:12:32.829904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.918 [2024-07-11 07:12:32.829908] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.829916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.829919] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.829938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.829945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.829967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.830045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.830051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.830055] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.830064] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:48.919 [2024-07-11 07:12:32.830068] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:48.919 [2024-07-11 07:12:32.830078] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.830092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.830108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.830165] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.830171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.830174] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.830189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830193] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830196] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.830202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.830219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.830327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.830335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.830339] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830343] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.830354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830377] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.830384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.830402] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.830479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.830486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.830490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.830518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830527] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.830534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.830554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.830643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.830649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.830652] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830656] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.830667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.830681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.830698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.830769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.830775] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.830779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.830792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.830806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:48.919 [2024-07-11 07:12:32.830823] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.830878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.830899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.830902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.830931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.830939] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.830945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.830962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.831021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.831027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.831031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831034] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.831044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.831058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.831074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.831126] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.831132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.831135] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831139] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.831149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.831163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.831179] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.831230] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.831236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.831240] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.831253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831257] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831260] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.831267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.831283] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.831344] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.831350] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.831353] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.831367] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831371] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.919 [2024-07-11 07:12:32.831381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.919 [2024-07-11 07:12:32.831397] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.919 [2024-07-11 07:12:32.831456] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.919 [2024-07-11 07:12:32.831462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.919 [2024-07-11 07:12:32.831465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.919 [2024-07-11 07:12:32.831479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831483] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.919 [2024-07-11 07:12:32.831486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.920 [2024-07-11 07:12:32.831493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.920 [2024-07-11 07:12:32.835513] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.920 [2024-07-11 07:12:32.835552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.920 [2024-07-11 07:12:32.835559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.920 
[2024-07-11 07:12:32.835563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.920 [2024-07-11 07:12:32.835567] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.920 [2024-07-11 07:12:32.835580] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.920 [2024-07-11 07:12:32.835585] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.920 [2024-07-11 07:12:32.835588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109e270) 00:19:48.920 [2024-07-11 07:12:32.835596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.920 [2024-07-11 07:12:32.835619] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ddaf0, cid 3, qid 0 00:19:48.920 [2024-07-11 07:12:32.835679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.920 [2024-07-11 07:12:32.835685] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.920 [2024-07-11 07:12:32.835688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.920 [2024-07-11 07:12:32.835692] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ddaf0) on tqpair=0x109e270 00:19:48.920 [2024-07-11 07:12:32.835700] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:19:48.920 00:19:48.920 07:12:32 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:48.920 [2024-07-11 07:12:32.871211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
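Note on this log: the trace above is the identify pass against the discovery subsystem (file-prefix spdk_pid82149), ending with its Discovery Log Page, whose two entries are the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1. The pass that starts here (spdk_pid82151) repeats the same controller bring-up against that NVM subsystem, using the spdk_nvme_identify command echoed by host/identify.sh just above. A minimal sketch of the two invocations, assuming the binary path and transport-ID string shown in this log; only the second command is echoed verbatim in this excerpt, so the discovery-side arguments are illustrative:

IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
# Pass 1 (spdk_pid82149, assumed form): query the discovery subsystem; -L all enables all debug trace flags
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
# Pass 2 (spdk_pid82151, echoed verbatim above): query the NVM subsystem listed in Discovery Log Entry 1
$IDENTIFY -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all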
00:19:48.920 [2024-07-11 07:12:32.871273] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82151 ] 00:19:49.183 [2024-07-11 07:12:33.008278] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:49.183 [2024-07-11 07:12:33.008341] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:49.183 [2024-07-11 07:12:33.008348] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:49.183 [2024-07-11 07:12:33.008357] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:49.183 [2024-07-11 07:12:33.008364] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:49.183 [2024-07-11 07:12:33.008447] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:49.183 [2024-07-11 07:12:33.008520] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cff270 0 00:19:49.183 [2024-07-11 07:12:33.015492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:49.183 [2024-07-11 07:12:33.015513] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:49.183 [2024-07-11 07:12:33.015534] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:49.183 [2024-07-11 07:12:33.015538] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:49.183 [2024-07-11 07:12:33.015572] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.015578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.015582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.183 [2024-07-11 07:12:33.015591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:49.183 [2024-07-11 07:12:33.015622] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.183 [2024-07-11 07:12:33.023518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.183 [2024-07-11 07:12:33.023538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.183 [2024-07-11 07:12:33.023559] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.183 [2024-07-11 07:12:33.023573] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:49.183 [2024-07-11 07:12:33.023580] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:49.183 [2024-07-11 07:12:33.023587] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:49.183 [2024-07-11 07:12:33.023600] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023608] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.183 [2024-07-11 07:12:33.023616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.183 [2024-07-11 07:12:33.023646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.183 [2024-07-11 07:12:33.023718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.183 [2024-07-11 07:12:33.023725] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.183 [2024-07-11 07:12:33.023728] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023732] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.183 [2024-07-11 07:12:33.023738] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:49.183 [2024-07-11 07:12:33.023745] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:49.183 [2024-07-11 07:12:33.023753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.183 [2024-07-11 07:12:33.023766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.183 [2024-07-11 07:12:33.023818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.183 [2024-07-11 07:12:33.023890] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.183 [2024-07-11 07:12:33.023897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.183 [2024-07-11 07:12:33.023900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.183 [2024-07-11 07:12:33.023910] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:49.183 [2024-07-11 07:12:33.023918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:49.183 [2024-07-11 07:12:33.023928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.023936] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.183 [2024-07-11 07:12:33.023942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.183 [2024-07-11 07:12:33.023961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.183 [2024-07-11 07:12:33.024019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.183 [2024-07-11 07:12:33.024025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.183 [2024-07-11 
07:12:33.024029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024032] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.183 [2024-07-11 07:12:33.024038] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:49.183 [2024-07-11 07:12:33.024048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.183 [2024-07-11 07:12:33.024062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.183 [2024-07-11 07:12:33.024080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.183 [2024-07-11 07:12:33.024144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.183 [2024-07-11 07:12:33.024150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.183 [2024-07-11 07:12:33.024153] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024157] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.183 [2024-07-11 07:12:33.024163] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:49.183 [2024-07-11 07:12:33.024167] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:49.183 [2024-07-11 07:12:33.024175] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:49.183 [2024-07-11 07:12:33.024280] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:49.183 [2024-07-11 07:12:33.024284] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:49.183 [2024-07-11 07:12:33.024291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024295] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024299] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.183 [2024-07-11 07:12:33.024305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.183 [2024-07-11 07:12:33.024325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.183 [2024-07-11 07:12:33.024387] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.183 [2024-07-11 07:12:33.024393] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.183 [2024-07-11 07:12:33.024397] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024401] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.183 
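The trace up to this point — ICReq/ICResp, FABRIC CONNECT, the VS/CAP/CC property reads and the CC.EN = 1 property write — is the controller bring-up that SPDK's NVMe host library performs internally. Below is a minimal, editor-added sketch of how an application would attach to the same target over TCP; it assumes the public SPDK host API (spdk_nvme_connect() and friends), the address, port and subsystem NQN are simply copied from the log, and the program name and error handling are illustrative only.

/* Minimal sketch (not part of the test run): attach to the target the identify
 * run above is talking to. Address, port and NQN are taken from the log; the
 * program name is made up. Assumes the SPDK NVMe host API. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Target seen in the log: NVMe/TCP, IPv4, 10.0.0.2:4420, cnode1. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	/* spdk_nvme_connect() drives the FABRIC CONNECT, property get/set and
	 * CC.EN/CSTS.RDY sequence recorded in the DEBUG trace above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}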
[2024-07-11 07:12:33.024406] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:49.183 [2024-07-11 07:12:33.024416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024423] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.183 [2024-07-11 07:12:33.024430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.183 [2024-07-11 07:12:33.024449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.183 [2024-07-11 07:12:33.024537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.183 [2024-07-11 07:12:33.024545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.183 [2024-07-11 07:12:33.024549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.183 [2024-07-11 07:12:33.024553] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.183 [2024-07-11 07:12:33.024558] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:49.183 [2024-07-11 07:12:33.024563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:49.183 [2024-07-11 07:12:33.024571] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:49.184 [2024-07-11 07:12:33.024584] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.024595] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024599] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.024610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.184 [2024-07-11 07:12:33.024633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.184 [2024-07-11 07:12:33.024748] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.184 [2024-07-11 07:12:33.024755] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.184 [2024-07-11 07:12:33.024758] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024762] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=4096, cccid=0 00:19:49.184 [2024-07-11 07:12:33.024767] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3e6d0) on tqpair(0x1cff270): expected_datao=0, payload_size=4096 00:19:49.184 [2024-07-11 07:12:33.024774] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024779] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.184 [2024-07-11 07:12:33.024793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.184 [2024-07-11 07:12:33.024796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024800] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.184 [2024-07-11 07:12:33.024823] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:49.184 [2024-07-11 07:12:33.024829] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:49.184 [2024-07-11 07:12:33.024833] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:49.184 [2024-07-11 07:12:33.024837] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:49.184 [2024-07-11 07:12:33.024841] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:49.184 [2024-07-11 07:12:33.024846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.024858] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.024865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.024873] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.024894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.184 [2024-07-11 07:12:33.024930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.184 [2024-07-11 07:12:33.025002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.184 [2024-07-11 07:12:33.025008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.184 [2024-07-11 07:12:33.025011] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3e6d0) on tqpair=0x1cff270 00:19:49.184 [2024-07-11 07:12:33.025023] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.184 [2024-07-11 07:12:33.025042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025045] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025049] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.184 [2024-07-11 07:12:33.025059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025066] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.184 [2024-07-11 07:12:33.025077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.184 [2024-07-11 07:12:33.025093] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025113] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025117] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025120] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.184 [2024-07-11 07:12:33.025147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e6d0, cid 0, qid 0 00:19:49.184 [2024-07-11 07:12:33.025154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e830, cid 1, qid 0 00:19:49.184 [2024-07-11 07:12:33.025159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3e990, cid 2, qid 0 00:19:49.184 [2024-07-11 07:12:33.025163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0 00:19:49.184 [2024-07-11 07:12:33.025168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ec50, cid 4, qid 0 00:19:49.184 [2024-07-11 07:12:33.025260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.184 [2024-07-11 07:12:33.025267] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.184 [2024-07-11 07:12:33.025270] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025274] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ec50) on tqpair=0x1cff270 00:19:49.184 [2024-07-11 07:12:33.025279] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:49.184 [2024-07-11 07:12:33.025284] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025292] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025309] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025316] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.184 [2024-07-11 07:12:33.025342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ec50, cid 4, qid 0 00:19:49.184 [2024-07-11 07:12:33.025405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.184 [2024-07-11 07:12:33.025411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.184 [2024-07-11 07:12:33.025414] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025418] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ec50) on tqpair=0x1cff270 00:19:49.184 [2024-07-11 07:12:33.025491] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.184 [2024-07-11 07:12:33.025563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ec50, cid 4, qid 0 00:19:49.184 [2024-07-11 07:12:33.025655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.184 [2024-07-11 07:12:33.025662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.184 [2024-07-11 07:12:33.025665] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025669] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=4096, cccid=4 00:19:49.184 [2024-07-11 07:12:33.025673] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3ec50) on tqpair(0x1cff270): expected_datao=0, payload_size=4096 00:19:49.184 [2024-07-11 07:12:33.025680] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025684] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:19:49.184 [2024-07-11 07:12:33.025708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.184 [2024-07-11 07:12:33.025714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.184 [2024-07-11 07:12:33.025717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ec50) on tqpair=0x1cff270 00:19:49.184 [2024-07-11 07:12:33.025738] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:49.184 [2024-07-11 07:12:33.025748] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025759] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:49.184 [2024-07-11 07:12:33.025767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025771] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025774] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cff270) 00:19:49.184 [2024-07-11 07:12:33.025781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.184 [2024-07-11 07:12:33.025803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ec50, cid 4, qid 0 00:19:49.184 [2024-07-11 07:12:33.025903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.184 [2024-07-11 07:12:33.025909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.184 [2024-07-11 07:12:33.025913] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.184 [2024-07-11 07:12:33.025916] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=4096, cccid=4 00:19:49.185 [2024-07-11 07:12:33.025920] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3ec50) on tqpair(0x1cff270): expected_datao=0, payload_size=4096 00:19:49.185 [2024-07-11 07:12:33.025928] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.025931] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.025939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.185 [2024-07-11 07:12:33.025945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.185 [2024-07-11 07:12:33.025948] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.025952] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ec50) on tqpair=0x1cff270 00:19:49.185 [2024-07-11 07:12:33.025968] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.025979] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.025987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.025992] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.025995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ec50, cid 4, qid 0 00:19:49.185 [2024-07-11 07:12:33.026114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.185 [2024-07-11 07:12:33.026122] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.185 [2024-07-11 07:12:33.026125] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026129] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=4096, cccid=4 00:19:49.185 [2024-07-11 07:12:33.026133] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3ec50) on tqpair(0x1cff270): expected_datao=0, payload_size=4096 00:19:49.185 [2024-07-11 07:12:33.026140] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026144] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.185 [2024-07-11 07:12:33.026158] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.185 [2024-07-11 07:12:33.026161] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026165] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ec50) on tqpair=0x1cff270 00:19:49.185 [2024-07-11 07:12:33.026175] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.026184] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.026194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.026200] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.026205] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.026210] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:49.185 [2024-07-11 07:12:33.026214] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:49.185 [2024-07-11 07:12:33.026219] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:49.185 [2024-07-11 07:12:33.026233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026241] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.185 [2024-07-11 07:12:33.026322] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ec50, cid 4, qid 0 00:19:49.185 [2024-07-11 07:12:33.026331] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3edb0, cid 5, qid 0 00:19:49.185 [2024-07-11 07:12:33.026407] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.185 [2024-07-11 07:12:33.026414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.185 [2024-07-11 07:12:33.026418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ec50) on tqpair=0x1cff270 00:19:49.185 [2024-07-11 07:12:33.026431] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.185 [2024-07-11 07:12:33.026437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.185 [2024-07-11 07:12:33.026441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026445] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3edb0) on tqpair=0x1cff270 00:19:49.185 [2024-07-11 07:12:33.026457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026523] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3edb0, cid 5, qid 0 00:19:49.185 [2024-07-11 07:12:33.026593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.185 [2024-07-11 07:12:33.026600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.185 [2024-07-11 07:12:33.026603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3edb0) on tqpair=0x1cff270 00:19:49.185 [2024-07-11 07:12:33.026618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026626] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026632] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026651] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3edb0, cid 5, qid 0 00:19:49.185 [2024-07-11 07:12:33.026730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.185 [2024-07-11 07:12:33.026737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.185 [2024-07-11 07:12:33.026740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026744] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3edb0) on tqpair=0x1cff270 00:19:49.185 [2024-07-11 07:12:33.026754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3edb0, cid 5, qid 0 00:19:49.185 [2024-07-11 07:12:33.026860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.185 [2024-07-11 07:12:33.026866] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.185 [2024-07-11 07:12:33.026870] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3edb0) on tqpair=0x1cff270 00:19:49.185 [2024-07-11 07:12:33.026887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026917] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026937] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026950] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026954] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.185 [2024-07-11 07:12:33.026958] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cff270) 00:19:49.185 [2024-07-11 07:12:33.026964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.185 [2024-07-11 07:12:33.026985] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3edb0, cid 5, qid 0 00:19:49.185 [2024-07-11 07:12:33.026992] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ec50, cid 4, qid 0 00:19:49.185 [2024-07-11 07:12:33.026997] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3ef10, cid 6, qid 0 00:19:49.185 [2024-07-11 07:12:33.027002] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3f070, cid 7, qid 0 00:19:49.185 ===================================================== 00:19:49.185 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.185 ===================================================== 00:19:49.185 Controller Capabilities/Features 00:19:49.185 ================================ 00:19:49.185 Vendor ID: 8086 00:19:49.185 Subsystem Vendor ID: 8086 00:19:49.185 Serial Number: SPDK00000000000001 00:19:49.185 Model Number: SPDK bdev Controller 00:19:49.185 Firmware Version: 24.01.1 00:19:49.185 Recommended Arb Burst: 6 00:19:49.185 IEEE OUI Identifier: e4 d2 5c 00:19:49.185 Multi-path I/O 00:19:49.185 May have multiple subsystem ports: Yes 00:19:49.185 May have multiple controllers: Yes 00:19:49.185 Associated with SR-IOV VF: No 00:19:49.185 Max Data Transfer Size: 131072 00:19:49.185 Max Number of Namespaces: 32 00:19:49.185 Max Number of I/O Queues: 127 00:19:49.186 NVMe Specification Version (VS): 1.3 00:19:49.186 NVMe Specification Version (Identify): 1.3 00:19:49.186 Maximum Queue Entries: 128 00:19:49.186 Contiguous Queues Required: Yes 00:19:49.186 Arbitration Mechanisms Supported 00:19:49.186 Weighted Round Robin: Not Supported 00:19:49.186 Vendor Specific: Not Supported 00:19:49.186 Reset Timeout: 15000 ms 00:19:49.186 Doorbell Stride: 4 bytes 00:19:49.186 NVM Subsystem Reset: Not Supported 00:19:49.186 Command Sets Supported 00:19:49.186 NVM Command Set: Supported 00:19:49.186 Boot Partition: Not Supported 00:19:49.186 Memory Page Size Minimum: 4096 bytes 00:19:49.186 Memory Page Size Maximum: 4096 bytes 00:19:49.186 Persistent Memory Region: Not Supported 00:19:49.186 Optional Asynchronous Events Supported 00:19:49.186 Namespace Attribute Notices: Supported 00:19:49.186 Firmware Activation Notices: Not Supported 00:19:49.186 ANA Change Notices: Not Supported 00:19:49.186 PLE Aggregate Log Change Notices: Not Supported 00:19:49.186 LBA Status Info Alert Notices: Not Supported 00:19:49.186 EGE Aggregate Log Change Notices: Not Supported 00:19:49.186 Normal NVM Subsystem Shutdown event: Not Supported 00:19:49.186 Zone Descriptor Change Notices: Not Supported 00:19:49.186 Discovery Log Change Notices: Not Supported 00:19:49.186 Controller Attributes 00:19:49.186 128-bit Host Identifier: Supported 00:19:49.186 Non-Operational Permissive Mode: Not Supported 00:19:49.186 NVM Sets: Not Supported 00:19:49.186 Read Recovery Levels: Not Supported 00:19:49.186 
Endurance Groups: Not Supported 00:19:49.186 Predictable Latency Mode: Not Supported 00:19:49.186 Traffic Based Keep ALive: Not Supported 00:19:49.186 Namespace Granularity: Not Supported 00:19:49.186 SQ Associations: Not Supported 00:19:49.186 UUID List: Not Supported 00:19:49.186 Multi-Domain Subsystem: Not Supported 00:19:49.186 Fixed Capacity Management: Not Supported 00:19:49.186 Variable Capacity Management: Not Supported 00:19:49.186 Delete Endurance Group: Not Supported 00:19:49.186 Delete NVM Set: Not Supported 00:19:49.186 Extended LBA Formats Supported: Not Supported 00:19:49.186 Flexible Data Placement Supported: Not Supported 00:19:49.186 00:19:49.186 Controller Memory Buffer Support 00:19:49.186 ================================ 00:19:49.186 Supported: No 00:19:49.186 00:19:49.186 Persistent Memory Region Support 00:19:49.186 ================================ 00:19:49.186 Supported: No 00:19:49.186 00:19:49.186 Admin Command Set Attributes 00:19:49.186 ============================ 00:19:49.186 Security Send/Receive: Not Supported 00:19:49.186 Format NVM: Not Supported 00:19:49.186 Firmware Activate/Download: Not Supported 00:19:49.186 Namespace Management: Not Supported 00:19:49.186 Device Self-Test: Not Supported 00:19:49.186 Directives: Not Supported 00:19:49.186 NVMe-MI: Not Supported 00:19:49.186 Virtualization Management: Not Supported 00:19:49.186 Doorbell Buffer Config: Not Supported 00:19:49.186 Get LBA Status Capability: Not Supported 00:19:49.186 Command & Feature Lockdown Capability: Not Supported 00:19:49.186 Abort Command Limit: 4 00:19:49.186 Async Event Request Limit: 4 00:19:49.186 Number of Firmware Slots: N/A 00:19:49.186 Firmware Slot 1 Read-Only: N/A 00:19:49.186 Firmware Activation Without Reset: [2024-07-11 07:12:33.027132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.186 [2024-07-11 07:12:33.027139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.186 [2024-07-11 07:12:33.027142] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027146] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=8192, cccid=5 00:19:49.186 [2024-07-11 07:12:33.027150] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3edb0) on tqpair(0x1cff270): expected_datao=0, payload_size=8192 00:19:49.186 [2024-07-11 07:12:33.027167] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027172] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027178] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.186 [2024-07-11 07:12:33.027183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.186 [2024-07-11 07:12:33.027186] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027190] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=512, cccid=4 00:19:49.186 [2024-07-11 07:12:33.027194] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3ec50) on tqpair(0x1cff270): expected_datao=0, payload_size=512 00:19:49.186 [2024-07-11 07:12:33.027200] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027204] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:19:49.186 [2024-07-11 07:12:33.027214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.186 [2024-07-11 07:12:33.027217] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027220] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=512, cccid=6 00:19:49.186 [2024-07-11 07:12:33.027224] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3ef10) on tqpair(0x1cff270): expected_datao=0, payload_size=512 00:19:49.186 [2024-07-11 07:12:33.027231] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027234] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.186 [2024-07-11 07:12:33.027244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.186 [2024-07-11 07:12:33.027247] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027251] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cff270): datao=0, datal=4096, cccid=7 00:19:49.186 [2024-07-11 07:12:33.027255] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3f070) on tqpair(0x1cff270): expected_datao=0, payload_size=4096 00:19:49.186 [2024-07-11 07:12:33.027261] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027265] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.186 [2024-07-11 07:12:33.027279] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.186 [2024-07-11 07:12:33.027282] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027285] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3edb0) on tqpair=0x1cff270 00:19:49.186 [2024-07-11 07:12:33.027301] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.186 [2024-07-11 07:12:33.027308] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.186 [2024-07-11 07:12:33.027311] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027315] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ec50) on tqpair=0x1cff270 00:19:49.186 [2024-07-11 07:12:33.027325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.186 [2024-07-11 07:12:33.027331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.186 [2024-07-11 07:12:33.027335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027339] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3ef10) on tqpair=0x1cff270 00:19:49.186 [2024-07-11 07:12:33.027347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.186 [2024-07-11 07:12:33.027353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.186 [2024-07-11 07:12:33.027356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.186 [2024-07-11 07:12:33.027360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3f070) on tqpair=0x1cff270 00:19:49.186 N/A 00:19:49.186 Multiple Update 
Detection Support: N/A 00:19:49.186 Firmware Update Granularity: No Information Provided 00:19:49.186 Per-Namespace SMART Log: No 00:19:49.186 Asymmetric Namespace Access Log Page: Not Supported 00:19:49.186 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:49.186 Command Effects Log Page: Supported 00:19:49.186 Get Log Page Extended Data: Supported 00:19:49.186 Telemetry Log Pages: Not Supported 00:19:49.186 Persistent Event Log Pages: Not Supported 00:19:49.186 Supported Log Pages Log Page: May Support 00:19:49.186 Commands Supported & Effects Log Page: Not Supported 00:19:49.186 Feature Identifiers & Effects Log Page:May Support 00:19:49.186 NVMe-MI Commands & Effects Log Page: May Support 00:19:49.186 Data Area 4 for Telemetry Log: Not Supported 00:19:49.186 Error Log Page Entries Supported: 128 00:19:49.186 Keep Alive: Supported 00:19:49.186 Keep Alive Granularity: 10000 ms 00:19:49.186 00:19:49.186 NVM Command Set Attributes 00:19:49.186 ========================== 00:19:49.186 Submission Queue Entry Size 00:19:49.186 Max: 64 00:19:49.186 Min: 64 00:19:49.186 Completion Queue Entry Size 00:19:49.186 Max: 16 00:19:49.186 Min: 16 00:19:49.186 Number of Namespaces: 32 00:19:49.186 Compare Command: Supported 00:19:49.186 Write Uncorrectable Command: Not Supported 00:19:49.186 Dataset Management Command: Supported 00:19:49.186 Write Zeroes Command: Supported 00:19:49.186 Set Features Save Field: Not Supported 00:19:49.186 Reservations: Supported 00:19:49.186 Timestamp: Not Supported 00:19:49.186 Copy: Supported 00:19:49.186 Volatile Write Cache: Present 00:19:49.186 Atomic Write Unit (Normal): 1 00:19:49.186 Atomic Write Unit (PFail): 1 00:19:49.186 Atomic Compare & Write Unit: 1 00:19:49.186 Fused Compare & Write: Supported 00:19:49.186 Scatter-Gather List 00:19:49.186 SGL Command Set: Supported 00:19:49.186 SGL Keyed: Supported 00:19:49.186 SGL Bit Bucket Descriptor: Not Supported 00:19:49.186 SGL Metadata Pointer: Not Supported 00:19:49.186 Oversized SGL: Not Supported 00:19:49.186 SGL Metadata Address: Not Supported 00:19:49.186 SGL Offset: Supported 00:19:49.186 Transport SGL Data Block: Not Supported 00:19:49.186 Replay Protected Memory Block: Not Supported 00:19:49.186 00:19:49.186 Firmware Slot Information 00:19:49.186 ========================= 00:19:49.186 Active slot: 1 00:19:49.186 Slot 1 Firmware Revision: 24.01.1 00:19:49.186 00:19:49.186 00:19:49.186 Commands Supported and Effects 00:19:49.186 ============================== 00:19:49.186 Admin Commands 00:19:49.186 -------------- 00:19:49.186 Get Log Page (02h): Supported 00:19:49.187 Identify (06h): Supported 00:19:49.187 Abort (08h): Supported 00:19:49.187 Set Features (09h): Supported 00:19:49.187 Get Features (0Ah): Supported 00:19:49.187 Asynchronous Event Request (0Ch): Supported 00:19:49.187 Keep Alive (18h): Supported 00:19:49.187 I/O Commands 00:19:49.187 ------------ 00:19:49.187 Flush (00h): Supported LBA-Change 00:19:49.187 Write (01h): Supported LBA-Change 00:19:49.187 Read (02h): Supported 00:19:49.187 Compare (05h): Supported 00:19:49.187 Write Zeroes (08h): Supported LBA-Change 00:19:49.187 Dataset Management (09h): Supported LBA-Change 00:19:49.187 Copy (19h): Supported LBA-Change 00:19:49.187 Unknown (79h): Supported LBA-Change 00:19:49.187 Unknown (7Ah): Supported 00:19:49.187 00:19:49.187 Error Log 00:19:49.187 ========= 00:19:49.187 00:19:49.187 Arbitration 00:19:49.187 =========== 00:19:49.187 Arbitration Burst: 1 00:19:49.187 00:19:49.187 Power Management 00:19:49.187 ================ 00:19:49.187 Number 
of Power States: 1 00:19:49.187 Current Power State: Power State #0 00:19:49.187 Power State #0: 00:19:49.187 Max Power: 0.00 W 00:19:49.187 Non-Operational State: Operational 00:19:49.187 Entry Latency: Not Reported 00:19:49.187 Exit Latency: Not Reported 00:19:49.187 Relative Read Throughput: 0 00:19:49.187 Relative Read Latency: 0 00:19:49.187 Relative Write Throughput: 0 00:19:49.187 Relative Write Latency: 0 00:19:49.187 Idle Power: Not Reported 00:19:49.187 Active Power: Not Reported 00:19:49.187 Non-Operational Permissive Mode: Not Supported 00:19:49.187 00:19:49.187 Health Information 00:19:49.187 ================== 00:19:49.187 Critical Warnings: 00:19:49.187 Available Spare Space: OK 00:19:49.187 Temperature: OK 00:19:49.187 Device Reliability: OK 00:19:49.187 Read Only: No 00:19:49.187 Volatile Memory Backup: OK 00:19:49.187 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:49.187 Temperature Threshold: [2024-07-11 07:12:33.027472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.187 [2024-07-11 07:12:33.027480] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.187 [2024-07-11 07:12:33.027484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cff270) 00:19:49.187 [2024-07-11 07:12:33.027507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.187 [2024-07-11 07:12:33.031510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3f070, cid 7, qid 0 00:19:49.187 [2024-07-11 07:12:33.031553] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.187 [2024-07-11 07:12:33.031561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.187 [2024-07-11 07:12:33.031565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.187 [2024-07-11 07:12:33.031569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3f070) on tqpair=0x1cff270 00:19:49.187 [2024-07-11 07:12:33.031605] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:49.187 [2024-07-11 07:12:33.031619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.187 [2024-07-11 07:12:33.031627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.187 [2024-07-11 07:12:33.031632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.187 [2024-07-11 07:12:33.031638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.188 [2024-07-11 07:12:33.031647] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031651] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031655] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.188 [2024-07-11 07:12:33.031662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.188 [2024-07-11 07:12:33.031689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0 00:19:49.188 
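The identify dump above comes from the controller and namespace data the driver cached during the init state machine traced earlier. A small editor-added sketch, under the same assumptions as the previous one, of how that data can be read back through the public API; the field names (sn, mn, fr) follow the NVMe identify-controller layout.

/* Sketch: read back the data behind the identify dump above. 'ctrlr' is assumed
 * to come from spdk_nvme_connect(), as in the earlier sketch. */
#include "spdk/nvme.h"
#include <stdio.h>

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	uint32_t nsid;

	/* Identify-controller fields are fixed-width, space-padded byte arrays. */
	printf("Serial Number:    %.*s\n", (int)sizeof(cdata->sn), cdata->sn);
	printf("Model Number:     %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), cdata->fr);

	/* Walk the active namespaces reported during init (namespace 1 in this run). */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("Namespace %u: %llu bytes\n", nsid,
		       (unsigned long long)spdk_nvme_ns_get_size(ns));
	}
}

Detaching the controller once this data has been read appears to correspond to the "Prepare to destruct SSD" and shutdown-timeout messages in the trace that follows.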
[2024-07-11 07:12:33.031771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.188 [2024-07-11 07:12:33.031777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.188 [2024-07-11 07:12:33.031781] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031785] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3eaf0) on tqpair=0x1cff270 00:19:49.188 [2024-07-11 07:12:33.031793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031801] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.188 [2024-07-11 07:12:33.031808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.188 [2024-07-11 07:12:33.031831] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0 00:19:49.188 [2024-07-11 07:12:33.031913] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.188 [2024-07-11 07:12:33.031919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.188 [2024-07-11 07:12:33.031923] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3eaf0) on tqpair=0x1cff270 00:19:49.188 [2024-07-11 07:12:33.031932] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:49.188 [2024-07-11 07:12:33.031936] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:49.188 [2024-07-11 07:12:33.031945] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031950] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.031953] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.188 [2024-07-11 07:12:33.031960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.188 [2024-07-11 07:12:33.031978] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0 00:19:49.188 [2024-07-11 07:12:33.032045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.188 [2024-07-11 07:12:33.032051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.188 [2024-07-11 07:12:33.032055] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.032058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3eaf0) on tqpair=0x1cff270 00:19:49.188 [2024-07-11 07:12:33.032069] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.032076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.032080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.188 [2024-07-11 07:12:33.032086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.188 [2024-07-11 
07:12:33.032104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0 00:19:49.188 [2024-07-11 07:12:33.032161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.188 [2024-07-11 07:12:33.032167] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.188 [2024-07-11 07:12:33.032170] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.032174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3eaf0) on tqpair=0x1cff270 00:19:49.188 [2024-07-11 07:12:33.032184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.032189] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.188 [2024-07-11 07:12:33.032192] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.188 [2024-07-11 07:12:33.032199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.188 [2024-07-11 07:12:33.032216] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0
[... the nvme_tcp/nvme_qpair DEBUG/NOTICE sequence above (pdu type 5 handling and FABRIC PROPERTY GET qid:0 cid:3 on tqpair 0x1cff270) repeats many more times during controller shutdown; the iterations are identical except for timestamps, so the duplicates are omitted here ...]
00:19:49.190 [2024-07-11 07:12:33.035352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.190 [2024-07-11 07:12:33.035358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.190
[2024-07-11 07:12:33.035362] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.190 [2024-07-11 07:12:33.035365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3eaf0) on tqpair=0x1cff270 00:19:49.190 [2024-07-11 07:12:33.035376] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.190 [2024-07-11 07:12:33.035380] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.190 [2024-07-11 07:12:33.035384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.190 [2024-07-11 07:12:33.035391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.190 [2024-07-11 07:12:33.035409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0 00:19:49.190 [2024-07-11 07:12:33.035491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.190 [2024-07-11 07:12:33.035498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.190 [2024-07-11 07:12:33.039551] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.190 [2024-07-11 07:12:33.039562] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3eaf0) on tqpair=0x1cff270 00:19:49.190 [2024-07-11 07:12:33.039579] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.190 [2024-07-11 07:12:33.039584] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.191 [2024-07-11 07:12:33.039588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cff270) 00:19:49.191 [2024-07-11 07:12:33.039596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.191 [2024-07-11 07:12:33.039626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3eaf0, cid 3, qid 0 00:19:49.191 [2024-07-11 07:12:33.039702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.191 [2024-07-11 07:12:33.039710] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.191 [2024-07-11 07:12:33.039714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.191 [2024-07-11 07:12:33.039718] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3eaf0) on tqpair=0x1cff270 00:19:49.191 [2024-07-11 07:12:33.039727] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:19:49.191 0 Kelvin (-273 Celsius) 00:19:49.191 Available Spare: 0% 00:19:49.191 Available Spare Threshold: 0% 00:19:49.191 Life Percentage Used: 0% 00:19:49.191 Data Units Read: 0 00:19:49.191 Data Units Written: 0 00:19:49.191 Host Read Commands: 0 00:19:49.191 Host Write Commands: 0 00:19:49.191 Controller Busy Time: 0 minutes 00:19:49.191 Power Cycles: 0 00:19:49.191 Power On Hours: 0 hours 00:19:49.191 Unsafe Shutdowns: 0 00:19:49.191 Unrecoverable Media Errors: 0 00:19:49.191 Lifetime Error Log Entries: 0 00:19:49.191 Warning Temperature Time: 0 minutes 00:19:49.191 Critical Temperature Time: 0 minutes 00:19:49.191 00:19:49.191 Number of Queues 00:19:49.191 ================ 00:19:49.191 Number of I/O Submission Queues: 127 00:19:49.191 Number of I/O Completion Queues: 127 00:19:49.191 00:19:49.191 Active Namespaces 00:19:49.191 ================= 00:19:49.191 Namespace ID:1 00:19:49.191 
Error Recovery Timeout: Unlimited 00:19:49.191 Command Set Identifier: NVM (00h) 00:19:49.191 Deallocate: Supported 00:19:49.191 Deallocated/Unwritten Error: Not Supported 00:19:49.191 Deallocated Read Value: Unknown 00:19:49.191 Deallocate in Write Zeroes: Not Supported 00:19:49.191 Deallocated Guard Field: 0xFFFF 00:19:49.191 Flush: Supported 00:19:49.191 Reservation: Supported 00:19:49.191 Namespace Sharing Capabilities: Multiple Controllers 00:19:49.191 Size (in LBAs): 131072 (0GiB) 00:19:49.191 Capacity (in LBAs): 131072 (0GiB) 00:19:49.191 Utilization (in LBAs): 131072 (0GiB) 00:19:49.191 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:49.191 EUI64: ABCDEF0123456789 00:19:49.191 UUID: 17302d9a-7e71-49a8-847c-f531207251e6 00:19:49.191 Thin Provisioning: Not Supported 00:19:49.191 Per-NS Atomic Units: Yes 00:19:49.191 Atomic Boundary Size (Normal): 0 00:19:49.191 Atomic Boundary Size (PFail): 0 00:19:49.191 Atomic Boundary Offset: 0 00:19:49.191 Maximum Single Source Range Length: 65535 00:19:49.191 Maximum Copy Length: 65535 00:19:49.191 Maximum Source Range Count: 1 00:19:49.191 NGUID/EUI64 Never Reused: No 00:19:49.191 Namespace Write Protected: No 00:19:49.191 Number of LBA Formats: 1 00:19:49.191 Current LBA Format: LBA Format #00 00:19:49.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:49.191 00:19:49.191 07:12:33 -- host/identify.sh@51 -- # sync 00:19:49.191 07:12:33 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.191 07:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.191 07:12:33 -- common/autotest_common.sh@10 -- # set +x 00:19:49.191 07:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.191 07:12:33 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:49.191 07:12:33 -- host/identify.sh@56 -- # nvmftestfini 00:19:49.191 07:12:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:49.191 07:12:33 -- nvmf/common.sh@116 -- # sync 00:19:49.191 07:12:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:49.191 07:12:33 -- nvmf/common.sh@119 -- # set +e 00:19:49.191 07:12:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:49.191 07:12:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:49.191 rmmod nvme_tcp 00:19:49.191 rmmod nvme_fabrics 00:19:49.191 rmmod nvme_keyring 00:19:49.191 07:12:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:49.191 07:12:33 -- nvmf/common.sh@123 -- # set -e 00:19:49.191 07:12:33 -- nvmf/common.sh@124 -- # return 0 00:19:49.191 07:12:33 -- nvmf/common.sh@477 -- # '[' -n 82096 ']' 00:19:49.191 07:12:33 -- nvmf/common.sh@478 -- # killprocess 82096 00:19:49.191 07:12:33 -- common/autotest_common.sh@926 -- # '[' -z 82096 ']' 00:19:49.191 07:12:33 -- common/autotest_common.sh@930 -- # kill -0 82096 00:19:49.191 07:12:33 -- common/autotest_common.sh@931 -- # uname 00:19:49.191 07:12:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:49.191 07:12:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82096 00:19:49.191 killing process with pid 82096 00:19:49.191 07:12:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:49.191 07:12:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:49.191 07:12:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82096' 00:19:49.191 07:12:33 -- common/autotest_common.sh@945 -- # kill 82096 00:19:49.191 [2024-07-11 07:12:33.199121] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 
'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:49.191 07:12:33 -- common/autotest_common.sh@950 -- # wait 82096 00:19:49.450 07:12:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:49.450 07:12:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:49.450 07:12:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:49.450 07:12:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.450 07:12:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:49.450 07:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.450 07:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.450 07:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.713 07:12:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:49.713 00:19:49.713 real 0m2.611s 00:19:49.713 user 0m7.459s 00:19:49.713 sys 0m0.654s 00:19:49.713 07:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.713 07:12:33 -- common/autotest_common.sh@10 -- # set +x 00:19:49.713 ************************************ 00:19:49.713 END TEST nvmf_identify 00:19:49.713 ************************************ 00:19:49.713 07:12:33 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:49.713 07:12:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:49.713 07:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.713 07:12:33 -- common/autotest_common.sh@10 -- # set +x 00:19:49.713 ************************************ 00:19:49.713 START TEST nvmf_perf 00:19:49.713 ************************************ 00:19:49.713 07:12:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:49.713 * Looking for test storage... 
00:19:49.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:49.713 07:12:33 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.713 07:12:33 -- nvmf/common.sh@7 -- # uname -s 00:19:49.713 07:12:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.713 07:12:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.713 07:12:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.713 07:12:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.713 07:12:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.713 07:12:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.713 07:12:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.713 07:12:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.713 07:12:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.713 07:12:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.713 07:12:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:49.713 07:12:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:19:49.713 07:12:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.713 07:12:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.713 07:12:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:49.713 07:12:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.713 07:12:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.713 07:12:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.713 07:12:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.713 07:12:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.713 07:12:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.713 07:12:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.713 07:12:33 -- paths/export.sh@5 -- 
# export PATH 00:19:49.713 07:12:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.713 07:12:33 -- nvmf/common.sh@46 -- # : 0 00:19:49.713 07:12:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:49.713 07:12:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:49.713 07:12:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:49.713 07:12:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.713 07:12:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.713 07:12:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:49.713 07:12:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:49.713 07:12:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:49.713 07:12:33 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:49.713 07:12:33 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:49.713 07:12:33 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.713 07:12:33 -- host/perf.sh@17 -- # nvmftestinit 00:19:49.713 07:12:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:49.713 07:12:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.713 07:12:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:49.713 07:12:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:49.713 07:12:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:49.713 07:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.713 07:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.713 07:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.713 07:12:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:49.713 07:12:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:49.713 07:12:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:49.713 07:12:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:49.713 07:12:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:49.713 07:12:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:49.713 07:12:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.713 07:12:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.713 07:12:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:49.713 07:12:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:49.713 07:12:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:49.713 07:12:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:49.713 07:12:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:49.713 07:12:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.713 07:12:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:49.713 07:12:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:49.713 07:12:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:49.713 07:12:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:49.713 07:12:33 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:49.713 07:12:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:49.713 Cannot find device "nvmf_tgt_br" 00:19:49.713 07:12:33 -- nvmf/common.sh@154 -- # true 00:19:49.713 07:12:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:49.713 Cannot find device "nvmf_tgt_br2" 00:19:49.713 07:12:33 -- nvmf/common.sh@155 -- # true 00:19:49.713 07:12:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:49.713 07:12:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:49.713 Cannot find device "nvmf_tgt_br" 00:19:49.713 07:12:33 -- nvmf/common.sh@157 -- # true 00:19:49.713 07:12:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:49.713 Cannot find device "nvmf_tgt_br2" 00:19:49.713 07:12:33 -- nvmf/common.sh@158 -- # true 00:19:49.713 07:12:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:49.970 07:12:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:49.970 07:12:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.970 07:12:33 -- nvmf/common.sh@161 -- # true 00:19:49.970 07:12:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.970 07:12:33 -- nvmf/common.sh@162 -- # true 00:19:49.970 07:12:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:49.970 07:12:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:49.970 07:12:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:49.970 07:12:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:49.970 07:12:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:49.970 07:12:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:49.970 07:12:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:49.970 07:12:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:49.970 07:12:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:49.970 07:12:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:49.970 07:12:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:49.970 07:12:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:49.970 07:12:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:49.970 07:12:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.970 07:12:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:49.970 07:12:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:49.970 07:12:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:49.970 07:12:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:49.970 07:12:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.970 07:12:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.970 07:12:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.970 07:12:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:50.228 07:12:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:50.228 07:12:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:50.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:19:50.228 00:19:50.228 --- 10.0.0.2 ping statistics --- 00:19:50.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.228 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:50.228 07:12:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:50.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:50.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:50.228 00:19:50.228 --- 10.0.0.3 ping statistics --- 00:19:50.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.228 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:50.228 07:12:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:50.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:50.228 00:19:50.228 --- 10.0.0.1 ping statistics --- 00:19:50.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.228 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:50.228 07:12:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.228 07:12:34 -- nvmf/common.sh@421 -- # return 0 00:19:50.228 07:12:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:50.228 07:12:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.228 07:12:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.228 07:12:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.228 07:12:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.228 07:12:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.228 07:12:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.228 07:12:34 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:50.228 07:12:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:50.228 07:12:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:50.228 07:12:34 -- common/autotest_common.sh@10 -- # set +x 00:19:50.228 07:12:34 -- nvmf/common.sh@469 -- # nvmfpid=82320 00:19:50.228 07:12:34 -- nvmf/common.sh@470 -- # waitforlisten 82320 00:19:50.228 07:12:34 -- common/autotest_common.sh@819 -- # '[' -z 82320 ']' 00:19:50.228 07:12:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.228 07:12:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.228 07:12:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.228 07:12:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.228 07:12:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.228 07:12:34 -- common/autotest_common.sh@10 -- # set +x 00:19:50.228 [2024-07-11 07:12:34.138086] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:50.229 [2024-07-11 07:12:34.138182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.229 [2024-07-11 07:12:34.274937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.486 [2024-07-11 07:12:34.351362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:50.486 [2024-07-11 07:12:34.351511] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.486 [2024-07-11 07:12:34.351525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.486 [2024-07-11 07:12:34.351532] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.486 [2024-07-11 07:12:34.351655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.486 [2024-07-11 07:12:34.352103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.486 [2024-07-11 07:12:34.352239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.486 [2024-07-11 07:12:34.352270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.054 07:12:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:51.055 07:12:35 -- common/autotest_common.sh@852 -- # return 0 00:19:51.055 07:12:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:51.055 07:12:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:51.055 07:12:35 -- common/autotest_common.sh@10 -- # set +x 00:19:51.055 07:12:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.055 07:12:35 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:51.055 07:12:35 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:51.622 07:12:35 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:51.622 07:12:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:51.881 07:12:35 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:19:51.881 07:12:35 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:52.140 07:12:36 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:52.140 07:12:36 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:19:52.140 07:12:36 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:52.140 07:12:36 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:52.140 07:12:36 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.399 [2024-07-11 07:12:36.252393] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.399 07:12:36 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.657 07:12:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:52.657 07:12:36 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.916 07:12:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:52.916 07:12:36 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:52.916 
07:12:36 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.175 [2024-07-11 07:12:37.166189] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.175 07:12:37 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:53.434 07:12:37 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:53.434 07:12:37 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:53.434 07:12:37 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:53.434 07:12:37 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:54.811 Initializing NVMe Controllers 00:19:54.811 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:19:54.811 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:19:54.811 Initialization complete. Launching workers. 00:19:54.811 ======================================================== 00:19:54.811 Latency(us) 00:19:54.811 Device Information : IOPS MiB/s Average min max 00:19:54.811 PCIE (0000:00:06.0) NSID 1 from core 0: 20439.95 79.84 1565.75 407.48 9007.33 00:19:54.811 ======================================================== 00:19:54.811 Total : 20439.95 79.84 1565.75 407.48 9007.33 00:19:54.811 00:19:54.811 07:12:38 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:56.185 Initializing NVMe Controllers 00:19:56.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:56.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:56.185 Initialization complete. Launching workers. 00:19:56.185 ======================================================== 00:19:56.185 Latency(us) 00:19:56.185 Device Information : IOPS MiB/s Average min max 00:19:56.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3662.96 14.31 272.77 99.60 7166.68 00:19:56.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.00 0.48 8245.57 6004.85 12025.03 00:19:56.185 ======================================================== 00:19:56.185 Total : 3784.95 14.78 529.76 99.60 12025.03 00:19:56.185 00:19:56.185 07:12:39 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:57.589 Initializing NVMe Controllers 00:19:57.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:57.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:57.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:57.589 Initialization complete. Launching workers. 
00:19:57.589 ======================================================== 00:19:57.589 Latency(us) 00:19:57.589 Device Information : IOPS MiB/s Average min max 00:19:57.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10609.56 41.44 3015.46 499.98 8141.25 00:19:57.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2645.17 10.33 12227.89 5925.60 24088.66 00:19:57.589 ======================================================== 00:19:57.589 Total : 13254.73 51.78 4853.93 499.98 24088.66 00:19:57.589 00:19:57.589 07:12:41 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:57.589 07:12:41 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.122 Initializing NVMe Controllers 00:20:00.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.122 Controller IO queue size 128, less than required. 00:20:00.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.122 Controller IO queue size 128, less than required. 00:20:00.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:00.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:00.122 Initialization complete. Launching workers. 00:20:00.122 ======================================================== 00:20:00.122 Latency(us) 00:20:00.122 Device Information : IOPS MiB/s Average min max 00:20:00.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1693.33 423.33 76622.05 50079.78 126837.93 00:20:00.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 545.78 136.45 245650.26 138284.99 370741.22 00:20:00.122 ======================================================== 00:20:00.122 Total : 2239.12 559.78 117822.68 50079.78 370741.22 00:20:00.122 00:20:00.122 07:12:43 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:00.122 No valid NVMe controllers or AIO or URING devices found 00:20:00.122 Initializing NVMe Controllers 00:20:00.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.122 Controller IO queue size 128, less than required. 00:20:00.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.122 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:00.122 Controller IO queue size 128, less than required. 00:20:00.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.122 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:00.122 WARNING: Some requested NVMe devices were skipped 00:20:00.122 07:12:43 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:02.652 Initializing NVMe Controllers 00:20:02.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.652 Controller IO queue size 128, less than required. 00:20:02.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:02.652 Controller IO queue size 128, less than required. 00:20:02.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:02.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:02.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:02.652 Initialization complete. Launching workers. 00:20:02.652 00:20:02.652 ==================== 00:20:02.652 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:02.652 TCP transport: 00:20:02.652 polls: 8417 00:20:02.652 idle_polls: 5625 00:20:02.652 sock_completions: 2792 00:20:02.652 nvme_completions: 5252 00:20:02.652 submitted_requests: 8108 00:20:02.652 queued_requests: 1 00:20:02.652 00:20:02.652 ==================== 00:20:02.652 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:02.652 TCP transport: 00:20:02.652 polls: 8676 00:20:02.652 idle_polls: 5778 00:20:02.652 sock_completions: 2898 00:20:02.652 nvme_completions: 5494 00:20:02.652 submitted_requests: 8386 00:20:02.652 queued_requests: 1 00:20:02.652 ======================================================== 00:20:02.652 Latency(us) 00:20:02.652 Device Information : IOPS MiB/s Average min max 00:20:02.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1373.69 343.42 94875.29 60174.21 159119.64 00:20:02.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1434.07 358.52 90159.11 47671.93 126512.36 00:20:02.652 ======================================================== 00:20:02.652 Total : 2807.76 701.94 92466.49 47671.93 159119.64 00:20:02.652 00:20:02.652 07:12:46 -- host/perf.sh@66 -- # sync 00:20:02.652 07:12:46 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.911 07:12:46 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:02.911 07:12:46 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:02.911 07:12:46 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:03.170 07:12:47 -- host/perf.sh@72 -- # ls_guid=31208e5c-d5dd-46ab-a9b1-e363a7e0fa73 00:20:03.170 07:12:47 -- host/perf.sh@73 -- # get_lvs_free_mb 31208e5c-d5dd-46ab-a9b1-e363a7e0fa73 00:20:03.170 07:12:47 -- common/autotest_common.sh@1343 -- # local lvs_uuid=31208e5c-d5dd-46ab-a9b1-e363a7e0fa73 00:20:03.170 07:12:47 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:03.170 07:12:47 -- common/autotest_common.sh@1345 -- # local fc 00:20:03.170 07:12:47 -- common/autotest_common.sh@1346 -- # local cs 00:20:03.170 07:12:47 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:03.428 07:12:47 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:03.429 { 
00:20:03.429 "base_bdev": "Nvme0n1", 00:20:03.429 "block_size": 4096, 00:20:03.429 "cluster_size": 4194304, 00:20:03.429 "free_clusters": 1278, 00:20:03.429 "name": "lvs_0", 00:20:03.429 "total_data_clusters": 1278, 00:20:03.429 "uuid": "31208e5c-d5dd-46ab-a9b1-e363a7e0fa73" 00:20:03.429 } 00:20:03.429 ]' 00:20:03.429 07:12:47 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="31208e5c-d5dd-46ab-a9b1-e363a7e0fa73") .free_clusters' 00:20:03.429 07:12:47 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:03.429 07:12:47 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="31208e5c-d5dd-46ab-a9b1-e363a7e0fa73") .cluster_size' 00:20:03.429 5112 00:20:03.429 07:12:47 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:03.429 07:12:47 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:03.429 07:12:47 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:03.429 07:12:47 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:03.429 07:12:47 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31208e5c-d5dd-46ab-a9b1-e363a7e0fa73 lbd_0 5112 00:20:03.687 07:12:47 -- host/perf.sh@80 -- # lb_guid=66eb61a0-99c2-4fc6-b920-19b70745199c 00:20:03.687 07:12:47 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 66eb61a0-99c2-4fc6-b920-19b70745199c lvs_n_0 00:20:03.946 07:12:47 -- host/perf.sh@83 -- # ls_nested_guid=22df24b7-acde-471c-9eec-bf87c379d27a 00:20:03.946 07:12:47 -- host/perf.sh@84 -- # get_lvs_free_mb 22df24b7-acde-471c-9eec-bf87c379d27a 00:20:03.946 07:12:47 -- common/autotest_common.sh@1343 -- # local lvs_uuid=22df24b7-acde-471c-9eec-bf87c379d27a 00:20:03.946 07:12:47 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:03.946 07:12:47 -- common/autotest_common.sh@1345 -- # local fc 00:20:03.946 07:12:47 -- common/autotest_common.sh@1346 -- # local cs 00:20:03.946 07:12:47 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:04.204 07:12:48 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:04.204 { 00:20:04.204 "base_bdev": "Nvme0n1", 00:20:04.204 "block_size": 4096, 00:20:04.204 "cluster_size": 4194304, 00:20:04.204 "free_clusters": 0, 00:20:04.204 "name": "lvs_0", 00:20:04.204 "total_data_clusters": 1278, 00:20:04.204 "uuid": "31208e5c-d5dd-46ab-a9b1-e363a7e0fa73" 00:20:04.204 }, 00:20:04.204 { 00:20:04.204 "base_bdev": "66eb61a0-99c2-4fc6-b920-19b70745199c", 00:20:04.204 "block_size": 4096, 00:20:04.204 "cluster_size": 4194304, 00:20:04.204 "free_clusters": 1276, 00:20:04.204 "name": "lvs_n_0", 00:20:04.204 "total_data_clusters": 1276, 00:20:04.204 "uuid": "22df24b7-acde-471c-9eec-bf87c379d27a" 00:20:04.204 } 00:20:04.204 ]' 00:20:04.205 07:12:48 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="22df24b7-acde-471c-9eec-bf87c379d27a") .free_clusters' 00:20:04.205 07:12:48 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:04.205 07:12:48 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="22df24b7-acde-471c-9eec-bf87c379d27a") .cluster_size' 00:20:04.205 5104 00:20:04.205 07:12:48 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:04.205 07:12:48 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:04.205 07:12:48 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:04.205 07:12:48 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:04.205 07:12:48 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 22df24b7-acde-471c-9eec-bf87c379d27a 
lbd_nest_0 5104 00:20:04.771 07:12:48 -- host/perf.sh@88 -- # lb_nested_guid=1a56fd65-c26f-4f88-8cde-8ac3106dcf5d 00:20:04.771 07:12:48 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.771 07:12:48 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:04.771 07:12:48 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 1a56fd65-c26f-4f88-8cde-8ac3106dcf5d 00:20:05.029 07:12:48 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.288 07:12:49 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:05.288 07:12:49 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:05.288 07:12:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:05.288 07:12:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:05.288 07:12:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:05.546 No valid NVMe controllers or AIO or URING devices found 00:20:05.546 Initializing NVMe Controllers 00:20:05.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.546 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:05.546 WARNING: Some requested NVMe devices were skipped 00:20:05.546 07:12:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:05.546 07:12:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:17.773 Initializing NVMe Controllers 00:20:17.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.773 Initialization complete. Launching workers. 
00:20:17.773 ======================================================== 00:20:17.773 Latency(us) 00:20:17.773 Device Information : IOPS MiB/s Average min max 00:20:17.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 831.51 103.94 1201.82 392.57 8413.65 00:20:17.773 ======================================================== 00:20:17.773 Total : 831.51 103.94 1201.82 392.57 8413.65 00:20:17.773 00:20:17.773 07:12:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:17.773 07:12:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:17.773 07:12:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:17.773 No valid NVMe controllers or AIO or URING devices found 00:20:17.773 Initializing NVMe Controllers 00:20:17.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.773 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:17.774 WARNING: Some requested NVMe devices were skipped 00:20:17.774 07:12:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:17.774 07:12:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:27.752 [2024-07-11 07:13:10.227497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ee40 is same with the state(5) to be set 00:20:27.752 [2024-07-11 07:13:10.227574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ee40 is same with the state(5) to be set 00:20:27.752 [2024-07-11 07:13:10.227602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ee40 is same with the state(5) to be set 00:20:27.752 [2024-07-11 07:13:10.227611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ee40 is same with the state(5) to be set 00:20:27.752 [2024-07-11 07:13:10.227618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ee40 is same with the state(5) to be set 00:20:27.752 [2024-07-11 07:13:10.227626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176ee40 is same with the state(5) to be set 00:20:27.752 Initializing NVMe Controllers 00:20:27.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:27.752 Initialization complete. Launching workers. 
00:20:27.752 ======================================================== 00:20:27.752 Latency(us) 00:20:27.752 Device Information : IOPS MiB/s Average min max 00:20:27.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1120.70 140.09 28594.71 8110.10 278448.42 00:20:27.752 ======================================================== 00:20:27.752 Total : 1120.70 140.09 28594.71 8110.10 278448.42 00:20:27.752 00:20:27.752 07:13:10 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:27.752 07:13:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:27.752 07:13:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:27.752 No valid NVMe controllers or AIO or URING devices found 00:20:27.752 Initializing NVMe Controllers 00:20:27.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.752 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:27.752 WARNING: Some requested NVMe devices were skipped 00:20:27.752 07:13:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:27.752 07:13:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.724 Initializing NVMe Controllers 00:20:37.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.724 Controller IO queue size 128, less than required. 00:20:37.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.724 Initialization complete. Launching workers. 
00:20:37.724 ======================================================== 00:20:37.724 Latency(us) 00:20:37.724 Device Information : IOPS MiB/s Average min max 00:20:37.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4110.89 513.86 31144.38 8885.53 66086.48 00:20:37.724 ======================================================== 00:20:37.724 Total : 4110.89 513.86 31144.38 8885.53 66086.48 00:20:37.724 00:20:37.724 07:13:20 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.724 07:13:21 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1a56fd65-c26f-4f88-8cde-8ac3106dcf5d 00:20:37.724 07:13:21 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:37.724 07:13:21 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 66eb61a0-99c2-4fc6-b920-19b70745199c 00:20:37.983 07:13:21 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:38.242 07:13:22 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:38.242 07:13:22 -- host/perf.sh@114 -- # nvmftestfini 00:20:38.242 07:13:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:38.243 07:13:22 -- nvmf/common.sh@116 -- # sync 00:20:38.243 07:13:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:38.243 07:13:22 -- nvmf/common.sh@119 -- # set +e 00:20:38.243 07:13:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:38.243 07:13:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:38.243 rmmod nvme_tcp 00:20:38.243 rmmod nvme_fabrics 00:20:38.243 rmmod nvme_keyring 00:20:38.243 07:13:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:38.243 07:13:22 -- nvmf/common.sh@123 -- # set -e 00:20:38.243 07:13:22 -- nvmf/common.sh@124 -- # return 0 00:20:38.243 07:13:22 -- nvmf/common.sh@477 -- # '[' -n 82320 ']' 00:20:38.243 07:13:22 -- nvmf/common.sh@478 -- # killprocess 82320 00:20:38.243 07:13:22 -- common/autotest_common.sh@926 -- # '[' -z 82320 ']' 00:20:38.243 07:13:22 -- common/autotest_common.sh@930 -- # kill -0 82320 00:20:38.243 07:13:22 -- common/autotest_common.sh@931 -- # uname 00:20:38.243 07:13:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.243 07:13:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82320 00:20:38.243 killing process with pid 82320 00:20:38.243 07:13:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:38.243 07:13:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:38.243 07:13:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82320' 00:20:38.243 07:13:22 -- common/autotest_common.sh@945 -- # kill 82320 00:20:38.243 07:13:22 -- common/autotest_common.sh@950 -- # wait 82320 00:20:39.617 07:13:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:39.617 07:13:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:39.617 07:13:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:39.617 07:13:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.617 07:13:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:39.617 07:13:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.617 07:13:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.618 07:13:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.618 07:13:23 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:39.618 00:20:39.618 real 0m50.007s 00:20:39.618 user 3m8.991s 00:20:39.618 sys 0m10.242s 00:20:39.618 07:13:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.618 07:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:39.618 ************************************ 00:20:39.618 END TEST nvmf_perf 00:20:39.618 ************************************ 00:20:39.618 07:13:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:39.618 07:13:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:39.618 07:13:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:39.618 07:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:39.618 ************************************ 00:20:39.618 START TEST nvmf_fio_host 00:20:39.618 ************************************ 00:20:39.618 07:13:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:39.875 * Looking for test storage... 00:20:39.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:39.875 07:13:23 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.875 07:13:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.875 07:13:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.875 07:13:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.875 07:13:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.875 07:13:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.875 07:13:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.875 07:13:23 -- paths/export.sh@5 -- # export PATH 00:20:39.875 07:13:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.875 07:13:23 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.875 07:13:23 -- nvmf/common.sh@7 -- # uname -s 00:20:39.875 07:13:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.875 07:13:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.875 07:13:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.875 07:13:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.875 07:13:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.875 07:13:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.875 07:13:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.875 07:13:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.875 07:13:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.875 07:13:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.875 07:13:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:20:39.875 07:13:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:20:39.875 07:13:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.875 07:13:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.875 07:13:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.875 07:13:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.875 07:13:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.875 07:13:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.875 07:13:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.875 07:13:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.875 07:13:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.875 07:13:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.876 07:13:23 -- paths/export.sh@5 -- # export PATH 00:20:39.876 07:13:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.876 07:13:23 -- nvmf/common.sh@46 -- # : 0 00:20:39.876 07:13:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:39.876 07:13:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:39.876 07:13:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:39.876 07:13:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.876 07:13:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.876 07:13:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:39.876 07:13:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:39.876 07:13:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:39.876 07:13:23 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.876 07:13:23 -- host/fio.sh@14 -- # nvmftestinit 00:20:39.876 07:13:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:39.876 07:13:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.876 07:13:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:39.876 07:13:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:39.876 07:13:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:39.876 07:13:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.876 07:13:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.876 07:13:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.876 07:13:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:39.876 07:13:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:39.876 07:13:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:39.876 07:13:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:39.876 07:13:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:39.876 07:13:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:39.876 07:13:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.876 07:13:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.876 07:13:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:39.876 07:13:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:39.876 07:13:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.876 07:13:23 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.876 07:13:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.876 07:13:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.876 07:13:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.876 07:13:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.876 07:13:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.876 07:13:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.876 07:13:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:39.876 07:13:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:39.876 Cannot find device "nvmf_tgt_br" 00:20:39.876 07:13:23 -- nvmf/common.sh@154 -- # true 00:20:39.876 07:13:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.876 Cannot find device "nvmf_tgt_br2" 00:20:39.876 07:13:23 -- nvmf/common.sh@155 -- # true 00:20:39.876 07:13:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:39.876 07:13:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:39.876 Cannot find device "nvmf_tgt_br" 00:20:39.876 07:13:23 -- nvmf/common.sh@157 -- # true 00:20:39.876 07:13:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:39.876 Cannot find device "nvmf_tgt_br2" 00:20:39.876 07:13:23 -- nvmf/common.sh@158 -- # true 00:20:39.876 07:13:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:39.876 07:13:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:39.876 07:13:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.876 07:13:23 -- nvmf/common.sh@161 -- # true 00:20:39.876 07:13:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.876 07:13:23 -- nvmf/common.sh@162 -- # true 00:20:39.876 07:13:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.876 07:13:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.876 07:13:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.876 07:13:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.876 07:13:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.134 07:13:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.134 07:13:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.134 07:13:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.134 07:13:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.134 07:13:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:40.134 07:13:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:40.134 07:13:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:40.134 07:13:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:40.134 07:13:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.134 07:13:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:40.134 07:13:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.134 07:13:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:40.134 07:13:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:40.134 07:13:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.134 07:13:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.134 07:13:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.134 07:13:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.134 07:13:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.134 07:13:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:40.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:20:40.134 00:20:40.134 --- 10.0.0.2 ping statistics --- 00:20:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.134 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:40.134 07:13:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:40.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:40.134 00:20:40.134 --- 10.0.0.3 ping statistics --- 00:20:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.134 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:40.134 07:13:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:40.134 00:20:40.134 --- 10.0.0.1 ping statistics --- 00:20:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.134 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:40.134 07:13:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.134 07:13:24 -- nvmf/common.sh@421 -- # return 0 00:20:40.134 07:13:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.134 07:13:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.134 07:13:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:40.134 07:13:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:40.134 07:13:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.134 07:13:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:40.134 07:13:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:40.134 07:13:24 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:40.134 07:13:24 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:40.134 07:13:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:40.134 07:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 07:13:24 -- host/fio.sh@24 -- # nvmfpid=83286 00:20:40.134 07:13:24 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.134 07:13:24 -- host/fio.sh@28 -- # waitforlisten 83286 00:20:40.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:40.134 07:13:24 -- common/autotest_common.sh@819 -- # '[' -z 83286 ']' 00:20:40.134 07:13:24 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.134 07:13:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.134 07:13:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:40.134 07:13:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.134 07:13:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:40.134 07:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:40.134 [2024-07-11 07:13:24.183948] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:40.134 [2024-07-11 07:13:24.184035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.392 [2024-07-11 07:13:24.323542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.392 [2024-07-11 07:13:24.420204] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.392 [2024-07-11 07:13:24.420958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.392 [2024-07-11 07:13:24.421182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.392 [2024-07-11 07:13:24.421419] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.392 [2024-07-11 07:13:24.421797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.392 [2024-07-11 07:13:24.421898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.392 [2024-07-11 07:13:24.421994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.392 [2024-07-11 07:13:24.421996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.328 07:13:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:41.328 07:13:25 -- common/autotest_common.sh@852 -- # return 0 00:20:41.328 07:13:25 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:41.328 [2024-07-11 07:13:25.299025] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.328 07:13:25 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:41.328 07:13:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:41.328 07:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:41.328 07:13:25 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:41.586 Malloc1 00:20:41.844 07:13:25 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.844 07:13:25 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:42.103 07:13:26 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.361 [2024-07-11 07:13:26.256688] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.361 07:13:26 -- 
host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:42.621 07:13:26 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:42.621 07:13:26 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:42.621 07:13:26 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:42.621 07:13:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:42.621 07:13:26 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.621 07:13:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:42.621 07:13:26 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:42.621 07:13:26 -- common/autotest_common.sh@1320 -- # shift 00:20:42.621 07:13:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:42.621 07:13:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:42.621 07:13:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:42.621 07:13:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:42.621 07:13:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:42.621 07:13:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:42.621 07:13:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:42.621 07:13:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:42.621 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:42.621 fio-3.35 00:20:42.621 Starting 1 thread 00:20:45.176 00:20:45.176 test: (groupid=0, jobs=1): err= 0: pid=83413: Thu Jul 11 07:13:28 2024 00:20:45.177 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.3MiB/2005msec) 00:20:45.177 slat (nsec): min=1742, max=352714, avg=2366.58, stdev=3254.52 00:20:45.177 clat (usec): min=3523, max=11256, avg=5927.24, stdev=512.12 00:20:45.177 lat (usec): min=3556, max=11266, avg=5929.61, stdev=512.24 00:20:45.177 clat percentiles (usec): 00:20:45.177 | 1.00th=[ 4948], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:20:45.177 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5997], 00:20:45.177 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6718], 00:20:45.177 | 99.00th=[ 7177], 99.50th=[ 8160], 99.90th=[10552], 99.95th=[11076], 00:20:45.177 | 99.99th=[11207] 
00:20:45.177 bw ( KiB/s): min=45304, max=46472, per=99.98%, avg=46096.00, stdev=549.66, samples=4 00:20:45.177 iops : min=11326, max=11618, avg=11524.00, stdev=137.41, samples=4 00:20:45.177 write: IOPS=11.4k, BW=44.7MiB/s (46.9MB/s)(89.7MiB/2005msec); 0 zone resets 00:20:45.177 slat (nsec): min=1810, max=279365, avg=2469.40, stdev=2648.57 00:20:45.177 clat (usec): min=2581, max=9770, avg=5172.02, stdev=407.70 00:20:45.177 lat (usec): min=2595, max=9772, avg=5174.49, stdev=407.84 00:20:45.177 clat percentiles (usec): 00:20:45.177 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4883], 00:20:45.177 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5276], 00:20:45.177 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5735], 00:20:45.177 | 99.00th=[ 6063], 99.50th=[ 7046], 99.90th=[ 8291], 99.95th=[ 9110], 00:20:45.177 | 99.99th=[ 9634] 00:20:45.177 bw ( KiB/s): min=45184, max=46336, per=99.97%, avg=45780.00, stdev=520.62, samples=4 00:20:45.177 iops : min=11296, max=11584, avg=11445.00, stdev=130.15, samples=4 00:20:45.177 lat (msec) : 4=0.09%, 10=99.84%, 20=0.07% 00:20:45.177 cpu : usr=61.48%, sys=26.00%, ctx=14, majf=0, minf=5 00:20:45.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:45.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.177 issued rwts: total=23111,22954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.177 00:20:45.177 Run status group 0 (all jobs): 00:20:45.177 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.3MiB (94.7MB), run=2005-2005msec 00:20:45.177 WRITE: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=89.7MiB (94.0MB), run=2005-2005msec 00:20:45.177 07:13:28 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:45.177 07:13:28 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:45.177 07:13:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:45.177 07:13:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:45.177 07:13:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:45.177 07:13:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:45.177 07:13:28 -- common/autotest_common.sh@1320 -- # shift 00:20:45.177 07:13:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:45.177 07:13:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.177 07:13:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:45.177 07:13:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:45.177 07:13:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:45.177 07:13:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:45.177 07:13:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:45.177 07:13:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.177 07:13:28 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:45.177 07:13:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:45.177 07:13:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:45.177 07:13:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:45.177 07:13:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:45.177 07:13:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:45.177 07:13:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:45.177 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:45.177 fio-3.35 00:20:45.177 Starting 1 thread 00:20:47.712 00:20:47.712 test: (groupid=0, jobs=1): err= 0: pid=83458: Thu Jul 11 07:13:31 2024 00:20:47.712 read: IOPS=9245, BW=144MiB/s (151MB/s)(290MiB/2009msec) 00:20:47.712 slat (usec): min=2, max=103, avg= 3.30, stdev= 2.24 00:20:47.712 clat (usec): min=2005, max=16892, avg=8281.22, stdev=2045.54 00:20:47.712 lat (usec): min=2008, max=16894, avg=8284.52, stdev=2045.62 00:20:47.712 clat percentiles (usec): 00:20:47.712 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6390], 00:20:47.712 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717], 00:20:47.712 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10945], 95.00th=[11731], 00:20:47.712 | 99.00th=[13829], 99.50th=[14353], 99.90th=[15270], 99.95th=[15533], 00:20:47.712 | 99.99th=[16909] 00:20:47.712 bw ( KiB/s): min=59552, max=87392, per=50.24%, avg=74328.00, stdev=11432.81, samples=4 00:20:47.712 iops : min= 3722, max= 5462, avg=4645.50, stdev=714.55, samples=4 00:20:47.712 write: IOPS=5451, BW=85.2MiB/s (89.3MB/s)(153MiB/1791msec); 0 zone resets 00:20:47.712 slat (usec): min=29, max=325, avg=33.49, stdev= 8.67 00:20:47.712 clat (usec): min=3718, max=16254, avg=9985.14, stdev=1747.72 00:20:47.712 lat (usec): min=3749, max=16287, avg=10018.64, stdev=1747.88 00:20:47.712 clat percentiles (usec): 00:20:47.712 | 1.00th=[ 6652], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8455], 00:20:47.712 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10159], 00:20:47.712 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12518], 95.00th=[13304], 00:20:47.712 | 99.00th=[14484], 99.50th=[15008], 99.90th=[16057], 99.95th=[16057], 00:20:47.712 | 99.99th=[16319] 00:20:47.712 bw ( KiB/s): min=62304, max=90752, per=88.91%, avg=77544.00, stdev=11694.25, samples=4 00:20:47.712 iops : min= 3894, max= 5672, avg=4846.50, stdev=730.89, samples=4 00:20:47.712 lat (msec) : 4=0.43%, 10=70.47%, 20=29.10% 00:20:47.712 cpu : usr=71.36%, sys=18.43%, ctx=3, majf=0, minf=22 00:20:47.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:20:47.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:47.712 issued rwts: total=18575,9763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:47.712 00:20:47.712 Run status group 0 (all jobs): 00:20:47.712 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=290MiB (304MB), run=2009-2009msec 00:20:47.712 WRITE: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=153MiB (160MB), run=1791-1791msec 00:20:47.712 07:13:31 
-- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.712 07:13:31 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:47.712 07:13:31 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:47.712 07:13:31 -- host/fio.sh@51 -- # get_nvme_bdfs 00:20:47.712 07:13:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:47.712 07:13:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:47.712 07:13:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:47.712 07:13:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:47.712 07:13:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:47.712 07:13:31 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:47.712 07:13:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:47.712 07:13:31 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:47.971 Nvme0n1 00:20:47.971 07:13:31 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:48.230 07:13:32 -- host/fio.sh@53 -- # ls_guid=b0e7fb40-630e-4d14-a5e6-1f1d3d2b8741 00:20:48.230 07:13:32 -- host/fio.sh@54 -- # get_lvs_free_mb b0e7fb40-630e-4d14-a5e6-1f1d3d2b8741 00:20:48.230 07:13:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b0e7fb40-630e-4d14-a5e6-1f1d3d2b8741 00:20:48.230 07:13:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:48.230 07:13:32 -- common/autotest_common.sh@1345 -- # local fc 00:20:48.230 07:13:32 -- common/autotest_common.sh@1346 -- # local cs 00:20:48.230 07:13:32 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:48.488 07:13:32 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:48.488 { 00:20:48.488 "base_bdev": "Nvme0n1", 00:20:48.488 "block_size": 4096, 00:20:48.488 "cluster_size": 1073741824, 00:20:48.488 "free_clusters": 4, 00:20:48.488 "name": "lvs_0", 00:20:48.488 "total_data_clusters": 4, 00:20:48.488 "uuid": "b0e7fb40-630e-4d14-a5e6-1f1d3d2b8741" 00:20:48.488 } 00:20:48.488 ]' 00:20:48.488 07:13:32 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b0e7fb40-630e-4d14-a5e6-1f1d3d2b8741") .free_clusters' 00:20:48.488 07:13:32 -- common/autotest_common.sh@1348 -- # fc=4 00:20:48.488 07:13:32 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b0e7fb40-630e-4d14-a5e6-1f1d3d2b8741") .cluster_size' 00:20:48.747 07:13:32 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:20:48.747 07:13:32 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:20:48.747 4096 00:20:48.747 07:13:32 -- common/autotest_common.sh@1353 -- # echo 4096 00:20:48.747 07:13:32 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:48.747 94ca27b5-115b-48e1-831c-7eade10caca2 00:20:48.747 07:13:32 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:49.006 07:13:32 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:49.264 07:13:33 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 
10.0.0.2 -s 4420 00:20:49.522 07:13:33 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.522 07:13:33 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.522 07:13:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:49.522 07:13:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:49.522 07:13:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:49.522 07:13:33 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:49.522 07:13:33 -- common/autotest_common.sh@1320 -- # shift 00:20:49.522 07:13:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:49.522 07:13:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:49.522 07:13:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:49.522 07:13:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:49.522 07:13:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:49.522 07:13:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:49.522 07:13:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:49.522 07:13:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:49.522 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:49.522 fio-3.35 00:20:49.522 Starting 1 thread 00:20:52.051 00:20:52.051 test: (groupid=0, jobs=1): err= 0: pid=83610: Thu Jul 11 07:13:35 2024 00:20:52.051 read: IOPS=7990, BW=31.2MiB/s (32.7MB/s)(62.6MiB/2006msec) 00:20:52.051 slat (nsec): min=1817, max=357253, avg=2860.77, stdev=4776.77 00:20:52.051 clat (usec): min=3674, max=14935, avg=8608.92, stdev=853.44 00:20:52.051 lat (usec): min=3684, max=14937, avg=8611.78, stdev=853.34 00:20:52.051 clat percentiles (usec): 00:20:52.051 | 1.00th=[ 6783], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7898], 00:20:52.051 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:20:52.051 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[10028], 00:20:52.051 | 99.00th=[10683], 99.50th=[10945], 99.90th=[13435], 99.95th=[14615], 00:20:52.051 | 99.99th=[14877] 00:20:52.051 bw ( KiB/s): min=30512, max=32528, per=99.88%, avg=31924.00, stdev=949.85, samples=4 00:20:52.051 iops : min= 7628, max= 8132, avg=7981.00, stdev=237.46, samples=4 00:20:52.051 write: IOPS=7966, 
BW=31.1MiB/s (32.6MB/s)(62.4MiB/2006msec); 0 zone resets 00:20:52.051 slat (nsec): min=1908, max=299892, avg=2936.77, stdev=3470.76 00:20:52.051 clat (usec): min=2683, max=13669, avg=7357.27, stdev=725.89 00:20:52.051 lat (usec): min=2695, max=13671, avg=7360.21, stdev=725.88 00:20:52.051 clat percentiles (usec): 00:20:52.051 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 6783], 00:20:52.051 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:20:52.051 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:20:52.051 | 99.00th=[ 8979], 99.50th=[ 9372], 99.90th=[12256], 99.95th=[12780], 00:20:52.051 | 99.99th=[13566] 00:20:52.051 bw ( KiB/s): min=31560, max=32000, per=99.92%, avg=31842.00, stdev=204.10, samples=4 00:20:52.051 iops : min= 7890, max= 8000, avg=7960.50, stdev=51.03, samples=4 00:20:52.051 lat (msec) : 4=0.05%, 10=97.46%, 20=2.49% 00:20:52.051 cpu : usr=63.19%, sys=25.64%, ctx=24, majf=0, minf=24 00:20:52.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:52.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.051 issued rwts: total=16029,15981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.051 00:20:52.051 Run status group 0 (all jobs): 00:20:52.051 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=62.6MiB (65.7MB), run=2006-2006msec 00:20:52.051 WRITE: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.4MiB (65.5MB), run=2006-2006msec 00:20:52.051 07:13:35 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:52.051 07:13:36 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:20:52.310 07:13:36 -- host/fio.sh@64 -- # ls_nested_guid=7039c17a-595f-4836-853d-1d66b62f181a 00:20:52.310 07:13:36 -- host/fio.sh@65 -- # get_lvs_free_mb 7039c17a-595f-4836-853d-1d66b62f181a 00:20:52.310 07:13:36 -- common/autotest_common.sh@1343 -- # local lvs_uuid=7039c17a-595f-4836-853d-1d66b62f181a 00:20:52.310 07:13:36 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:52.310 07:13:36 -- common/autotest_common.sh@1345 -- # local fc 00:20:52.310 07:13:36 -- common/autotest_common.sh@1346 -- # local cs 00:20:52.310 07:13:36 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:52.568 07:13:36 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:52.568 { 00:20:52.568 "base_bdev": "Nvme0n1", 00:20:52.568 "block_size": 4096, 00:20:52.568 "cluster_size": 1073741824, 00:20:52.568 "free_clusters": 0, 00:20:52.568 "name": "lvs_0", 00:20:52.568 "total_data_clusters": 4, 00:20:52.568 "uuid": "b0e7fb40-630e-4d14-a5e6-1f1d3d2b8741" 00:20:52.568 }, 00:20:52.568 { 00:20:52.568 "base_bdev": "94ca27b5-115b-48e1-831c-7eade10caca2", 00:20:52.568 "block_size": 4096, 00:20:52.568 "cluster_size": 4194304, 00:20:52.568 "free_clusters": 1022, 00:20:52.568 "name": "lvs_n_0", 00:20:52.568 "total_data_clusters": 1022, 00:20:52.568 "uuid": "7039c17a-595f-4836-853d-1d66b62f181a" 00:20:52.568 } 00:20:52.568 ]' 00:20:52.568 07:13:36 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="7039c17a-595f-4836-853d-1d66b62f181a") .free_clusters' 00:20:52.568 07:13:36 -- 
common/autotest_common.sh@1348 -- # fc=1022 00:20:52.568 07:13:36 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="7039c17a-595f-4836-853d-1d66b62f181a") .cluster_size' 00:20:52.568 4088 00:20:52.568 07:13:36 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:52.568 07:13:36 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:20:52.568 07:13:36 -- common/autotest_common.sh@1353 -- # echo 4088 00:20:52.568 07:13:36 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:20:52.826 2c156d01-0b44-4404-a934-bcdce2dffe8a 00:20:52.826 07:13:36 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:20:53.084 07:13:37 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:20:53.343 07:13:37 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:53.601 07:13:37 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:53.601 07:13:37 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:53.601 07:13:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:53.601 07:13:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:53.602 07:13:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:53.602 07:13:37 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:53.602 07:13:37 -- common/autotest_common.sh@1320 -- # shift 00:20:53.602 07:13:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:53.602 07:13:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:53.602 07:13:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:53.602 07:13:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:53.602 07:13:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:53.602 07:13:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:53.602 07:13:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:53.602 07:13:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:53.602 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk, iodepth=128 00:20:53.602 fio-3.35 00:20:53.602 Starting 1 thread 00:20:56.133 00:20:56.133 test: (groupid=0, jobs=1): err= 0: pid=83730: Thu Jul 11 07:13:39 2024 00:20:56.133 read: IOPS=6757, BW=26.4MiB/s (27.7MB/s)(53.0MiB/2008msec) 00:20:56.133 slat (nsec): min=1762, max=350422, avg=2811.88, stdev=4633.87 00:20:56.133 clat (usec): min=4235, max=16481, avg=10192.12, stdev=990.82 00:20:56.133 lat (usec): min=4245, max=16484, avg=10194.93, stdev=990.63 00:20:56.133 clat percentiles (usec): 00:20:56.133 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:20:56.133 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:20:56.133 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:20:56.133 | 99.00th=[12387], 99.50th=[12780], 99.90th=[14615], 99.95th=[15664], 00:20:56.133 | 99.99th=[16319] 00:20:56.133 bw ( KiB/s): min=26011, max=27616, per=99.81%, avg=26978.75, stdev=690.41, samples=4 00:20:56.133 iops : min= 6502, max= 6904, avg=6744.50, stdev=172.95, samples=4 00:20:56.133 write: IOPS=6756, BW=26.4MiB/s (27.7MB/s)(53.0MiB/2008msec); 0 zone resets 00:20:56.133 slat (nsec): min=1835, max=241635, avg=2907.24, stdev=3436.71 00:20:56.133 clat (usec): min=2655, max=15820, avg=8663.26, stdev=833.05 00:20:56.133 lat (usec): min=2668, max=15823, avg=8666.17, stdev=832.92 00:20:56.133 clat percentiles (usec): 00:20:56.133 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8029], 00:20:56.133 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:56.133 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:20:56.133 | 99.00th=[10552], 99.50th=[10814], 99.90th=[14222], 99.95th=[14746], 00:20:56.133 | 99.99th=[15795] 00:20:56.133 bw ( KiB/s): min=26752, max=27192, per=99.95%, avg=27014.25, stdev=187.90, samples=4 00:20:56.133 iops : min= 6688, max= 6798, avg=6753.50, stdev=46.97, samples=4 00:20:56.133 lat (msec) : 4=0.03%, 10=69.38%, 20=30.59% 00:20:56.133 cpu : usr=68.36%, sys=23.72%, ctx=4, majf=0, minf=24 00:20:56.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:56.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.133 issued rwts: total=13569,13568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.133 00:20:56.133 Run status group 0 (all jobs): 00:20:56.133 READ: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=53.0MiB (55.6MB), run=2008-2008msec 00:20:56.133 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=53.0MiB (55.6MB), run=2008-2008msec 00:20:56.133 07:13:39 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:56.133 07:13:40 -- host/fio.sh@74 -- # sync 00:20:56.133 07:13:40 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:20:56.392 07:13:40 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:56.651 07:13:40 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:20:56.909 07:13:40 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:57.168 07:13:41 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_nvme_detach_controller Nvme0 00:20:58.544 07:13:42 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:58.544 07:13:42 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:58.544 07:13:42 -- host/fio.sh@86 -- # nvmftestfini 00:20:58.544 07:13:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:58.544 07:13:42 -- nvmf/common.sh@116 -- # sync 00:20:58.544 07:13:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:58.544 07:13:42 -- nvmf/common.sh@119 -- # set +e 00:20:58.544 07:13:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:58.544 07:13:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:58.544 rmmod nvme_tcp 00:20:58.544 rmmod nvme_fabrics 00:20:58.544 rmmod nvme_keyring 00:20:58.544 07:13:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:58.544 07:13:42 -- nvmf/common.sh@123 -- # set -e 00:20:58.544 07:13:42 -- nvmf/common.sh@124 -- # return 0 00:20:58.544 07:13:42 -- nvmf/common.sh@477 -- # '[' -n 83286 ']' 00:20:58.544 07:13:42 -- nvmf/common.sh@478 -- # killprocess 83286 00:20:58.544 07:13:42 -- common/autotest_common.sh@926 -- # '[' -z 83286 ']' 00:20:58.544 07:13:42 -- common/autotest_common.sh@930 -- # kill -0 83286 00:20:58.544 07:13:42 -- common/autotest_common.sh@931 -- # uname 00:20:58.544 07:13:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.544 07:13:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83286 00:20:58.544 killing process with pid 83286 00:20:58.544 07:13:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:58.544 07:13:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:58.544 07:13:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83286' 00:20:58.544 07:13:42 -- common/autotest_common.sh@945 -- # kill 83286 00:20:58.544 07:13:42 -- common/autotest_common.sh@950 -- # wait 83286 00:20:58.544 07:13:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:58.544 07:13:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:58.544 07:13:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:58.544 07:13:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.544 07:13:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:58.544 07:13:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.544 07:13:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.544 07:13:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.544 07:13:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:58.544 00:20:58.544 real 0m18.923s 00:20:58.544 user 1m22.100s 00:20:58.544 sys 0m4.550s 00:20:58.544 07:13:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.544 07:13:42 -- common/autotest_common.sh@10 -- # set +x 00:20:58.544 ************************************ 00:20:58.544 END TEST nvmf_fio_host 00:20:58.544 ************************************ 00:20:58.803 07:13:42 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:58.804 07:13:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:58.804 07:13:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:58.804 07:13:42 -- common/autotest_common.sh@10 -- # set +x 00:20:58.804 ************************************ 00:20:58.804 START TEST nvmf_failover 00:20:58.804 ************************************ 00:20:58.804 07:13:42 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:58.804 * Looking for test storage... 00:20:58.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:58.804 07:13:42 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.804 07:13:42 -- nvmf/common.sh@7 -- # uname -s 00:20:58.804 07:13:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.804 07:13:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.804 07:13:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.804 07:13:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.804 07:13:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.804 07:13:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.804 07:13:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.804 07:13:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.804 07:13:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.804 07:13:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.804 07:13:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:20:58.804 07:13:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:20:58.804 07:13:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.804 07:13:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.804 07:13:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.804 07:13:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.804 07:13:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.804 07:13:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.804 07:13:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.804 07:13:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.804 07:13:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.804 07:13:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.804 07:13:42 -- paths/export.sh@5 -- # export PATH 00:20:58.804 07:13:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.804 07:13:42 -- nvmf/common.sh@46 -- # : 0 00:20:58.804 07:13:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:58.804 07:13:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:58.804 07:13:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:58.804 07:13:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.804 07:13:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.804 07:13:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:58.804 07:13:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:58.804 07:13:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:58.804 07:13:42 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.804 07:13:42 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.804 07:13:42 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.804 07:13:42 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.804 07:13:42 -- host/failover.sh@18 -- # nvmftestinit 00:20:58.804 07:13:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:58.804 07:13:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.804 07:13:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:58.804 07:13:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:58.804 07:13:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:58.804 07:13:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.804 07:13:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.804 07:13:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.804 07:13:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:58.804 07:13:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:58.804 07:13:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:58.804 07:13:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:58.804 07:13:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:58.804 07:13:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:58.804 07:13:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.804 07:13:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.804 07:13:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:58.804 07:13:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:58.804 07:13:42 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.804 07:13:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.804 07:13:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.804 07:13:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.804 07:13:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.804 07:13:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.804 07:13:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.804 07:13:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.804 07:13:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:58.804 07:13:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:58.804 Cannot find device "nvmf_tgt_br" 00:20:58.804 07:13:42 -- nvmf/common.sh@154 -- # true 00:20:58.804 07:13:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.804 Cannot find device "nvmf_tgt_br2" 00:20:58.804 07:13:42 -- nvmf/common.sh@155 -- # true 00:20:58.804 07:13:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:58.804 07:13:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:58.804 Cannot find device "nvmf_tgt_br" 00:20:58.804 07:13:42 -- nvmf/common.sh@157 -- # true 00:20:58.804 07:13:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:58.804 Cannot find device "nvmf_tgt_br2" 00:20:58.804 07:13:42 -- nvmf/common.sh@158 -- # true 00:20:58.804 07:13:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:58.804 07:13:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:58.804 07:13:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.804 07:13:42 -- nvmf/common.sh@161 -- # true 00:20:58.804 07:13:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.804 07:13:42 -- nvmf/common.sh@162 -- # true 00:20:58.804 07:13:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:58.804 07:13:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:58.804 07:13:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:58.804 07:13:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:59.063 07:13:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:59.063 07:13:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:59.063 07:13:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:59.063 07:13:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:59.063 07:13:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:59.063 07:13:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:59.063 07:13:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:59.063 07:13:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:59.063 07:13:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:59.063 07:13:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:20:59.063 07:13:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:59.063 07:13:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:59.063 07:13:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:59.063 07:13:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:59.063 07:13:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:59.063 07:13:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:59.063 07:13:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:59.063 07:13:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:59.063 07:13:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:59.063 07:13:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:59.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:20:59.063 00:20:59.063 --- 10.0.0.2 ping statistics --- 00:20:59.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.063 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:59.063 07:13:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:59.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:59.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:59.063 00:20:59.063 --- 10.0.0.3 ping statistics --- 00:20:59.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.063 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:59.063 07:13:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:59.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:59.063 00:20:59.063 --- 10.0.0.1 ping statistics --- 00:20:59.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.063 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:59.063 07:13:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.063 07:13:43 -- nvmf/common.sh@421 -- # return 0 00:20:59.063 07:13:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:59.063 07:13:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.063 07:13:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:59.063 07:13:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:59.063 07:13:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.063 07:13:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:59.063 07:13:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:59.063 07:13:43 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:59.063 07:13:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:59.063 07:13:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:59.063 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:20:59.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
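For reference, the virtual test topology that nvmf_veth_init assembled above (the initiator interface on the host, the target interfaces inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge) can be reproduced by hand roughly as follows. This is a minimal sketch distilled from the commands visible in the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all error handling are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per endpoint; the *_br ends stay on the host and join the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator address on the host, target address inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side veth peers so 10.0.0.1 can reach 10.0.0.2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # verify the initiator can reach the target address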
00:20:59.063 07:13:43 -- nvmf/common.sh@469 -- # nvmfpid=84001 00:20:59.063 07:13:43 -- nvmf/common.sh@470 -- # waitforlisten 84001 00:20:59.063 07:13:43 -- common/autotest_common.sh@819 -- # '[' -z 84001 ']' 00:20:59.063 07:13:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:59.063 07:13:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.063 07:13:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:59.063 07:13:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.063 07:13:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:59.063 07:13:43 -- common/autotest_common.sh@10 -- # set +x 00:20:59.063 [2024-07-11 07:13:43.102877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:59.063 [2024-07-11 07:13:43.102939] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.322 [2024-07-11 07:13:43.236016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.322 [2024-07-11 07:13:43.311158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:59.322 [2024-07-11 07:13:43.311324] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.322 [2024-07-11 07:13:43.311338] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.322 [2024-07-11 07:13:43.311357] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
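The waitforlisten step above does little more than poll the target's RPC socket until the freshly started nvmf_tgt answers. A rough shell equivalent of that readiness check is sketched below; the binary path, namespace, core mask and socket path are taken from the log, while the retry loop and the use of rpc_get_methods as the probe are illustrative assumptions, not the helper's exact implementation:

  # start the target inside the test namespace, as the log shows
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # poll the UNIX-domain RPC socket until the application is ready to serve RPCs
  for _ in $(seq 1 100); do
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done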
00:20:59.322 [2024-07-11 07:13:43.311752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.322 [2024-07-11 07:13:43.311947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.322 [2024-07-11 07:13:43.311958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.258 07:13:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:00.258 07:13:44 -- common/autotest_common.sh@852 -- # return 0 00:21:00.258 07:13:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:00.258 07:13:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:00.258 07:13:44 -- common/autotest_common.sh@10 -- # set +x 00:21:00.258 07:13:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.258 07:13:44 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:00.517 [2024-07-11 07:13:44.399832] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.517 07:13:44 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:00.776 Malloc0 00:21:00.776 07:13:44 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.776 07:13:44 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:01.035 07:13:45 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.294 [2024-07-11 07:13:45.181595] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.294 07:13:45 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:01.553 [2024-07-11 07:13:45.454119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:01.553 07:13:45 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:01.812 [2024-07-11 07:13:45.646391] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:01.812 07:13:45 -- host/failover.sh@31 -- # bdevperf_pid=84108 00:21:01.812 07:13:45 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.812 07:13:45 -- host/failover.sh@34 -- # waitforlisten 84108 /var/tmp/bdevperf.sock 00:21:01.812 07:13:45 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:01.812 07:13:45 -- common/autotest_common.sh@819 -- # '[' -z 84108 ']' 00:21:01.812 07:13:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.812 07:13:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.812 07:13:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
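Condensed, the target-side state built in the steps above, together with the two paths bdevperf attaches next, corresponds to the following RPC sequence. The commands are copied from the log; grouping the listeners into a loop and the comments are editorial, and reading the second attach as a failover path follows from the shared controller name NVMe0:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the options the test passes
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed namespace, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                      # three listeners, so single paths can be dropped later
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf (started with -z -r /var/tmp/bdevperf.sock) then attaches the same subsystem on
  # ports 4420 and 4421 under one controller name, giving it a path to fail over to
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1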
00:21:01.812 07:13:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.812 07:13:45 -- common/autotest_common.sh@10 -- # set +x 00:21:02.749 07:13:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.749 07:13:46 -- common/autotest_common.sh@852 -- # return 0 00:21:02.749 07:13:46 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.749 NVMe0n1 00:21:03.008 07:13:46 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:03.008 00:21:03.267 07:13:47 -- host/failover.sh@39 -- # run_test_pid=84160 00:21:03.267 07:13:47 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.267 07:13:47 -- host/failover.sh@41 -- # sleep 1 00:21:04.203 07:13:48 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.462 [2024-07-11 07:13:48.275875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.275944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.275968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.275977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.275986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.275994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276070] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set (message repeated for each subsequent 07:13:48.276xxx timestamp; duplicates omitted) tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is
same with the state(5) to be set 00:21:04.462 [2024-07-11 07:13:48.276486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3c20 is same with the state(5) to be set 00:21:04.462 07:13:48 -- host/failover.sh@45 -- # sleep 3 00:21:07.752 07:13:51 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:07.752 00:21:07.752 07:13:51 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:08.010 [2024-07-11 07:13:51.850366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.010 [2024-07-11 07:13:51.850572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6a4a90 is same with the state(5) to be set (message repeated for each subsequent 07:13:51.850xxx timestamp; duplicates omitted)
00:21:08.011 [2024-07-11 07:13:51.850964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.850971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.850978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.850985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.850992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.850999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 [2024-07-11 07:13:51.851071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4a90 is same with the state(5) to be set 00:21:08.011 07:13:51 -- host/failover.sh@50 -- # sleep 3 00:21:11.295 07:13:54 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.295 [2024-07-11 07:13:55.060376] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.295 07:13:55 -- host/failover.sh@55 -- # sleep 1 00:21:12.258 07:13:56 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:12.517 [2024-07-11 07:13:56.323704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.517 [2024-07-11 07:13:56.323751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.517 [2024-07-11 07:13:56.323762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.517 [2024-07-11 07:13:56.323770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.517 
[2024-07-11 07:13:56.323778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set (message repeated for each subsequent 07:13:56.323xxx/324xxx timestamp; duplicates omitted) [2024-07-11 07:13:56.324335]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.518 [2024-07-11 07:13:56.324341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.518 [2024-07-11 07:13:56.324348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.518 [2024-07-11 07:13:56.324356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.518 [2024-07-11 07:13:56.324363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859d90 is same with the state(5) to be set 00:21:12.518 07:13:56 -- host/failover.sh@59 -- # wait 84160 00:21:19.085 0 00:21:19.085 07:14:02 -- host/failover.sh@61 -- # killprocess 84108 00:21:19.085 07:14:02 -- common/autotest_common.sh@926 -- # '[' -z 84108 ']' 00:21:19.085 07:14:02 -- common/autotest_common.sh@930 -- # kill -0 84108 00:21:19.085 07:14:02 -- common/autotest_common.sh@931 -- # uname 00:21:19.085 07:14:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:19.085 07:14:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84108 00:21:19.085 killing process with pid 84108 00:21:19.085 07:14:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:19.085 07:14:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:19.085 07:14:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84108' 00:21:19.085 07:14:02 -- common/autotest_common.sh@945 -- # kill 84108 00:21:19.085 07:14:02 -- common/autotest_common.sh@950 -- # wait 84108 00:21:19.085 07:14:02 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:19.085 [2024-07-11 07:13:45.724882] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:19.085 [2024-07-11 07:13:45.724998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84108 ] 00:21:19.085 [2024-07-11 07:13:45.868689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.085 [2024-07-11 07:13:45.965745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.085 Running I/O for 15 seconds... 
00:21:19.085 [2024-07-11 07:13:48.276997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 07:13:48.277301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.085 [2024-07-11 
07:13:48.277326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.085 [2024-07-11 07:13:48.277338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.277676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.277703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.277746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.277884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.277980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.277999] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.086 [2024-07-11 07:13:48.278619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.086 [2024-07-11 07:13:48.278753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.086 [2024-07-11 07:13:48.278767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.278981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.278993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.087 [2024-07-11 07:13:48.279005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.087 [2024-07-11 07:13:48.279089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.087 [2024-07-11 07:13:48.279167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 
07:13:48.279266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.087 [2024-07-11 07:13:48.279517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.087 [2024-07-11 07:13:48.279802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.087 [2024-07-11 07:13:48.279829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.087 [2024-07-11 07:13:48.279883] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.087 [2024-07-11 07:13:48.279907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.087 [2024-07-11 07:13:48.279986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.087 [2024-07-11 07:13:48.279999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.088 [2024-07-11 07:13:48.280672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 
07:13:48.280716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:48.280916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.280929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a56fc0 is same with the state(5) to be set 00:21:19.088 [2024-07-11 07:13:48.280943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.088 [2024-07-11 07:13:48.280952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.088 [2024-07-11 07:13:48.280961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129608 len:8 PRP1 0x0 PRP2 0x0 00:21:19.088 [2024-07-11 07:13:48.280972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.281026] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a56fc0 was disconnected and freed. reset controller. 
00:21:19.088 [2024-07-11 07:13:48.281042] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:19.088 [2024-07-11 07:13:48.281101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.088 [2024-07-11 07:13:48.281120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.281133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.088 [2024-07-11 07:13:48.281144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.281156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.088 [2024-07-11 07:13:48.281167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.281179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.088 [2024-07-11 07:13:48.281190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:48.281202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.088 [2024-07-11 07:13:48.281237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ea010 (9): Bad file descriptor 00:21:19.088 [2024-07-11 07:13:48.283533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.088 [2024-07-11 07:13:48.303191] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:19.088 [2024-07-11 07:13:51.851173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:51.851231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:51.851272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.088 [2024-07-11 07:13:51.851288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.088 [2024-07-11 07:13:51.851302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851528] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.851978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.851993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.089 [2024-07-11 07:13:51.852261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.089 [2024-07-11 07:13:51.852292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.089 [2024-07-11 07:13:51.852360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.089 [2024-07-11 07:13:51.852386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.089 [2024-07-11 07:13:51.852412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.089 [2024-07-11 07:13:51.852426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.852440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35728 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.852642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.090 [2024-07-11 07:13:51.852889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.852962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.852975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.853001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.853099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.853171] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.853195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.853219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853508] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.853606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.090 [2024-07-11 07:13:51.853648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.090 [2024-07-11 07:13:51.853926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.090 [2024-07-11 07:13:51.853955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.853974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.853988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.091 [2024-07-11 07:13:51.854338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854666] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.854923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854963] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.854975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.854988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.855070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.855123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.091 [2024-07-11 07:13:51.855148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.091 [2024-07-11 07:13:51.855291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.091 [2024-07-11 07:13:51.855303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:51.855329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:51.855355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:51.855381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a58d10 is same with the state(5) to be set 00:21:19.092 [2024-07-11 07:13:51.855416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.092 [2024-07-11 07:13:51.855426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.092 [2024-07-11 07:13:51.855441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35648 len:8 PRP1 0x0 PRP2 0x0 00:21:19.092 [2024-07-11 07:13:51.855469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855538] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a58d10 was disconnected and freed. reset controller. 
00:21:19.092 [2024-07-11 07:13:51.855558] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:19.092 [2024-07-11 07:13:51.855618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:51.855637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:51.855664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:51.855689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:51.855714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:51.855727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.092 [2024-07-11 07:13:51.855759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ea010 (9): Bad file descriptor 00:21:19.092 [2024-07-11 07:13:51.858145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.092 [2024-07-11 07:13:51.885753] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:19.092 [2024-07-11 07:13:56.324400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:56.324442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:56.324505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:56.324533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.092 [2024-07-11 07:13:56.324578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ea010 is same with the state(5) to be set 00:21:19.092 [2024-07-11 07:13:56.324693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.324986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.324998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.325022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.325047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.325072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.325138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.325213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.092 [2024-07-11 07:13:56.325439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.325470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:19.092 [2024-07-11 07:13:56.325483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.092 [2024-07-11 07:13:56.325494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.325677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.325702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.325725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 
[2024-07-11 07:13:56.325738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.325749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.325852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.325899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.325954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.325979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.325992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.326003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.326027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.326129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.093 [2024-07-11 07:13:56.326153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:96 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.093 [2024-07-11 07:13:56.326501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.093 [2024-07-11 07:13:56.326516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.326755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.094 [2024-07-11 07:13:56.326883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.326978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.326991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327126] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.094 [2024-07-11 07:13:56.327645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.094 [2024-07-11 07:13:56.327670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.094 [2024-07-11 07:13:56.327683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.095 [2024-07-11 07:13:56.327770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.095 [2024-07-11 07:13:56.327820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.095 [2024-07-11 07:13:56.327926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.095 [2024-07-11 07:13:56.327968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.327980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.327993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.095 [2024-07-11 07:13:56.328004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.328029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.095 [2024-07-11 07:13:56.328053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.328077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.095 [2024-07-11 07:13:56.328101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.328125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.328150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.328174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.095 [2024-07-11 07:13:56.328205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328232] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:19.095 [2024-07-11 07:13:56.328244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:19.095 [2024-07-11 07:13:56.328254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52552 len:8 PRP1 0x0 PRP2 0x0 00:21:19.095 [2024-07-11 07:13:56.328265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.095 [2024-07-11 07:13:56.328318] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a58eb0 was disconnected and freed. reset controller. 00:21:19.095 [2024-07-11 07:13:56.328333] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:19.095 [2024-07-11 07:13:56.328346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.095 [2024-07-11 07:13:56.330434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.095 [2024-07-11 07:13:56.330483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ea010 (9): Bad file descriptor 00:21:19.095 [2024-07-11 07:13:56.348382] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:19.095 00:21:19.095 Latency(us) 00:21:19.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.095 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.095 Verification LBA range: start 0x0 length 0x4000 00:21:19.095 NVMe0n1 : 15.00 15134.41 59.12 231.73 0.00 8315.32 463.59 15609.48 00:21:19.095 =================================================================================================================== 00:21:19.095 Total : 15134.41 59.12 231.73 0.00 8315.32 463.59 15609.48 00:21:19.095 Received shutdown signal, test time was about 15.000000 seconds 00:21:19.095 00:21:19.095 Latency(us) 00:21:19.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.095 =================================================================================================================== 00:21:19.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.095 07:14:02 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:19.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.095 07:14:02 -- host/failover.sh@65 -- # count=3 00:21:19.095 07:14:02 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:19.095 07:14:02 -- host/failover.sh@73 -- # bdevperf_pid=84364 00:21:19.095 07:14:02 -- host/failover.sh@75 -- # waitforlisten 84364 /var/tmp/bdevperf.sock 00:21:19.095 07:14:02 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:19.095 07:14:02 -- common/autotest_common.sh@819 -- # '[' -z 84364 ']' 00:21:19.095 07:14:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.095 07:14:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:19.095 07:14:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:19.095 07:14:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:19.095 07:14:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.662 07:14:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:19.662 07:14:03 -- common/autotest_common.sh@852 -- # return 0 00:21:19.662 07:14:03 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:19.920 [2024-07-11 07:14:03.764956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:19.920 07:14:03 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:19.920 [2024-07-11 07:14:03.961125] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:20.179 07:14:03 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.179 NVMe0n1 00:21:20.437 07:14:04 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.437 00:21:20.437 07:14:04 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.695 00:21:20.695 07:14:04 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:20.695 07:14:04 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:20.954 07:14:04 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:21.213 07:14:05 -- host/failover.sh@87 -- # sleep 3 00:21:24.496 07:14:08 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:24.496 07:14:08 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:24.496 07:14:08 -- host/failover.sh@90 -- # run_test_pid=84498 00:21:24.497 07:14:08 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.497 07:14:08 -- host/failover.sh@92 -- # wait 84498 00:21:25.872 0 00:21:25.872 07:14:09 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:25.872 [2024-07-11 07:14:02.531839] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:25.872 [2024-07-11 07:14:02.532001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84364 ] 00:21:25.872 [2024-07-11 07:14:02.664401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.872 [2024-07-11 07:14:02.743154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.872 [2024-07-11 07:14:05.174096] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:25.872 [2024-07-11 07:14:05.174215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.872 [2024-07-11 07:14:05.174237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.872 [2024-07-11 07:14:05.174254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.872 [2024-07-11 07:14:05.174267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.872 [2024-07-11 07:14:05.174316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.872 [2024-07-11 07:14:05.174334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.872 [2024-07-11 07:14:05.174348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.873 [2024-07-11 07:14:05.174361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.873 [2024-07-11 07:14:05.174375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.873 [2024-07-11 07:14:05.174419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.873 [2024-07-11 07:14:05.174450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5f010 (9): Bad file descriptor 00:21:25.873 [2024-07-11 07:14:05.179555] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:25.873 Running I/O for 1 seconds... 
00:21:25.873 00:21:25.873 Latency(us) 00:21:25.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.873 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:25.873 Verification LBA range: start 0x0 length 0x4000 00:21:25.873 NVMe0n1 : 1.01 15701.19 61.33 0.00 0.00 8118.80 1280.93 9651.67 00:21:25.873 =================================================================================================================== 00:21:25.873 Total : 15701.19 61.33 0.00 0.00 8118.80 1280.93 9651.67 00:21:25.873 07:14:09 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:25.873 07:14:09 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:25.873 07:14:09 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.132 07:14:10 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.132 07:14:10 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:26.390 07:14:10 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.390 07:14:10 -- host/failover.sh@101 -- # sleep 3 00:21:29.679 07:14:13 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.679 07:14:13 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:29.679 07:14:13 -- host/failover.sh@108 -- # killprocess 84364 00:21:29.679 07:14:13 -- common/autotest_common.sh@926 -- # '[' -z 84364 ']' 00:21:29.679 07:14:13 -- common/autotest_common.sh@930 -- # kill -0 84364 00:21:29.679 07:14:13 -- common/autotest_common.sh@931 -- # uname 00:21:29.679 07:14:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:29.679 07:14:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84364 00:21:29.679 killing process with pid 84364 00:21:29.679 07:14:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:29.679 07:14:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:29.679 07:14:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84364' 00:21:29.680 07:14:13 -- common/autotest_common.sh@945 -- # kill 84364 00:21:29.680 07:14:13 -- common/autotest_common.sh@950 -- # wait 84364 00:21:29.938 07:14:13 -- host/failover.sh@110 -- # sync 00:21:29.938 07:14:13 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.196 07:14:14 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:30.196 07:14:14 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:30.196 07:14:14 -- host/failover.sh@116 -- # nvmftestfini 00:21:30.196 07:14:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:30.196 07:14:14 -- nvmf/common.sh@116 -- # sync 00:21:30.196 07:14:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:30.196 07:14:14 -- nvmf/common.sh@119 -- # set +e 00:21:30.196 07:14:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:30.196 07:14:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:30.196 rmmod nvme_tcp 00:21:30.196 rmmod nvme_fabrics 00:21:30.196 rmmod nvme_keyring 00:21:30.196 07:14:14 -- nvmf/common.sh@122 -- 
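For readers tracing the failover flow above, the whole exercise reduces to a handful of rpc.py calls: the target exposes nqn.2016-06.io.spdk:cnode1 on additional TCP listeners (4421 and 4422), the bdevperf side attaches the same subsystem once per path so bdev_nvme holds three paths for NVMe0, and detaching the path currently in use triggers the "Start failover from ... to ..." notices and a controller reset onto the next path. A condensed sketch of that sequence, reusing the addresses and RPC socket from this run (illustrative only, not the verbatim failover.sh script):

  # target side: add the extra failover listeners
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side (bdevperf RPC socket): attach NVMe0 over each path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the active path; bdev_nvme resets the controller onto the next one
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  # confirm the bdev survived the failover
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0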
# modprobe -v -r nvme-fabrics 00:21:30.196 07:14:14 -- nvmf/common.sh@123 -- # set -e 00:21:30.196 07:14:14 -- nvmf/common.sh@124 -- # return 0 00:21:30.196 07:14:14 -- nvmf/common.sh@477 -- # '[' -n 84001 ']' 00:21:30.196 07:14:14 -- nvmf/common.sh@478 -- # killprocess 84001 00:21:30.196 07:14:14 -- common/autotest_common.sh@926 -- # '[' -z 84001 ']' 00:21:30.196 07:14:14 -- common/autotest_common.sh@930 -- # kill -0 84001 00:21:30.196 07:14:14 -- common/autotest_common.sh@931 -- # uname 00:21:30.196 07:14:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:30.196 07:14:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84001 00:21:30.454 killing process with pid 84001 00:21:30.454 07:14:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:30.454 07:14:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:30.454 07:14:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84001' 00:21:30.454 07:14:14 -- common/autotest_common.sh@945 -- # kill 84001 00:21:30.454 07:14:14 -- common/autotest_common.sh@950 -- # wait 84001 00:21:30.712 07:14:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:30.712 07:14:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:30.712 07:14:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:30.712 07:14:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.712 07:14:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:30.713 07:14:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.713 07:14:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.713 07:14:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.713 07:14:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:30.713 00:21:30.713 real 0m31.951s 00:21:30.713 user 2m3.495s 00:21:30.713 sys 0m4.715s 00:21:30.713 07:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.713 ************************************ 00:21:30.713 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:21:30.713 END TEST nvmf_failover 00:21:30.713 ************************************ 00:21:30.713 07:14:14 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:30.713 07:14:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:30.713 07:14:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:30.713 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:21:30.713 ************************************ 00:21:30.713 START TEST nvmf_discovery 00:21:30.713 ************************************ 00:21:30.713 07:14:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:30.713 * Looking for test storage... 
00:21:30.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:30.713 07:14:14 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:30.713 07:14:14 -- nvmf/common.sh@7 -- # uname -s 00:21:30.713 07:14:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.713 07:14:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.713 07:14:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.713 07:14:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.713 07:14:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.713 07:14:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.713 07:14:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.713 07:14:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.713 07:14:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.713 07:14:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.713 07:14:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:21:30.713 07:14:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:21:30.713 07:14:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.713 07:14:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.713 07:14:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:30.713 07:14:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:30.713 07:14:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.713 07:14:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.713 07:14:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.713 07:14:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.713 07:14:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.713 07:14:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.713 07:14:14 -- paths/export.sh@5 
-- # export PATH 00:21:30.713 07:14:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.713 07:14:14 -- nvmf/common.sh@46 -- # : 0 00:21:30.713 07:14:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:30.713 07:14:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:30.713 07:14:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:30.713 07:14:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.713 07:14:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.713 07:14:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:30.713 07:14:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:30.713 07:14:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:30.713 07:14:14 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:30.713 07:14:14 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:30.713 07:14:14 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:30.713 07:14:14 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:30.713 07:14:14 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:30.713 07:14:14 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:30.713 07:14:14 -- host/discovery.sh@25 -- # nvmftestinit 00:21:30.713 07:14:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:30.713 07:14:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.713 07:14:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:30.713 07:14:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:30.713 07:14:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:30.713 07:14:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.713 07:14:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.713 07:14:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.713 07:14:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:30.713 07:14:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:30.713 07:14:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:30.713 07:14:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:30.713 07:14:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:30.713 07:14:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:30.713 07:14:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.713 07:14:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.713 07:14:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:30.713 07:14:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:30.713 07:14:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:30.713 07:14:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:30.713 07:14:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:30.713 07:14:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.713 07:14:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:30.713 
07:14:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:30.713 07:14:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:30.713 07:14:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:30.713 07:14:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:30.713 07:14:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:30.713 Cannot find device "nvmf_tgt_br" 00:21:30.713 07:14:14 -- nvmf/common.sh@154 -- # true 00:21:30.713 07:14:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:30.972 Cannot find device "nvmf_tgt_br2" 00:21:30.972 07:14:14 -- nvmf/common.sh@155 -- # true 00:21:30.972 07:14:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:30.972 07:14:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:30.972 Cannot find device "nvmf_tgt_br" 00:21:30.972 07:14:14 -- nvmf/common.sh@157 -- # true 00:21:30.972 07:14:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:30.972 Cannot find device "nvmf_tgt_br2" 00:21:30.972 07:14:14 -- nvmf/common.sh@158 -- # true 00:21:30.972 07:14:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:30.972 07:14:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:30.972 07:14:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:30.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:30.972 07:14:14 -- nvmf/common.sh@161 -- # true 00:21:30.972 07:14:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:30.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:30.972 07:14:14 -- nvmf/common.sh@162 -- # true 00:21:30.972 07:14:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:30.972 07:14:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:30.972 07:14:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:30.972 07:14:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:30.972 07:14:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:30.972 07:14:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:30.972 07:14:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:30.972 07:14:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:30.972 07:14:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:30.972 07:14:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:30.972 07:14:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:30.972 07:14:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:30.972 07:14:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:30.972 07:14:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:30.972 07:14:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:30.972 07:14:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:30.972 07:14:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:30.972 07:14:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:30.972 07:14:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:21:31.230 07:14:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.230 07:14:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.230 07:14:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.230 07:14:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.231 07:14:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:31.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:21:31.231 00:21:31.231 --- 10.0.0.2 ping statistics --- 00:21:31.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.231 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:31.231 07:14:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:31.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:21:31.231 00:21:31.231 --- 10.0.0.3 ping statistics --- 00:21:31.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.231 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:31.231 07:14:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:31.231 00:21:31.231 --- 10.0.0.1 ping statistics --- 00:21:31.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.231 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:31.231 07:14:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.231 07:14:15 -- nvmf/common.sh@421 -- # return 0 00:21:31.231 07:14:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:31.231 07:14:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.231 07:14:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:31.231 07:14:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:31.231 07:14:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.231 07:14:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:31.231 07:14:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:31.231 07:14:15 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:31.231 07:14:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:31.231 07:14:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:31.231 07:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:31.231 07:14:15 -- nvmf/common.sh@469 -- # nvmfpid=84794 00:21:31.231 07:14:15 -- nvmf/common.sh@470 -- # waitforlisten 84794 00:21:31.231 07:14:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:31.231 07:14:15 -- common/autotest_common.sh@819 -- # '[' -z 84794 ']' 00:21:31.231 07:14:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.231 07:14:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:31.231 07:14:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
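The nvmf_veth_init trace above is the densest part of this setup, so here is the same topology condensed into plain iproute2/iptables commands. This is an illustrative sketch reconstructed from the trace (interface names, addresses and rules are taken verbatim from the log; it is not the nvmf/common.sh source):

# one network namespace for the target; initiator endpoints stay in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target-side listener addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring links up and bridge the three root-namespace peers together
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# admit NVMe/TCP traffic on 4420, allow bridge forwarding, then sanity-ping both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, the nvmf_tgt process started next (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 84794 in this run) owns the 10.0.0.2/10.0.0.3 side, while the host-side process started later in the trace connects in from 10.0.0.1.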
00:21:31.231 07:14:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:31.231 07:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:31.231 [2024-07-11 07:14:15.168012] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:31.231 [2024-07-11 07:14:15.168103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.490 [2024-07-11 07:14:15.305145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.490 [2024-07-11 07:14:15.376798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:31.490 [2024-07-11 07:14:15.376928] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.490 [2024-07-11 07:14:15.376939] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.490 [2024-07-11 07:14:15.376947] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.490 [2024-07-11 07:14:15.376971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.057 07:14:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:32.057 07:14:16 -- common/autotest_common.sh@852 -- # return 0 00:21:32.057 07:14:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:32.057 07:14:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:32.057 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:32.057 07:14:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.057 07:14:16 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:32.057 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.057 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:32.057 [2024-07-11 07:14:16.080531] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.057 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.057 07:14:16 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:32.057 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.057 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:32.057 [2024-07-11 07:14:16.088641] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:32.057 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.057 07:14:16 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:32.057 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.057 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:32.057 null0 00:21:32.057 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.057 07:14:16 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:32.057 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.057 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:32.057 null1 00:21:32.057 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.057 07:14:16 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:32.057 07:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.057 07:14:16 -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.317 07:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.317 07:14:16 -- host/discovery.sh@45 -- # hostpid=84844 00:21:32.317 07:14:16 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:32.317 07:14:16 -- host/discovery.sh@46 -- # waitforlisten 84844 /tmp/host.sock 00:21:32.317 07:14:16 -- common/autotest_common.sh@819 -- # '[' -z 84844 ']' 00:21:32.317 07:14:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:32.317 07:14:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:32.317 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:32.317 07:14:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:32.317 07:14:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:32.317 07:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:32.317 [2024-07-11 07:14:16.178532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:32.317 [2024-07-11 07:14:16.178619] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84844 ] 00:21:32.317 [2024-07-11 07:14:16.316055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.576 [2024-07-11 07:14:16.421218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:32.576 [2024-07-11 07:14:16.421407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.143 07:14:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:33.143 07:14:17 -- common/autotest_common.sh@852 -- # return 0 00:21:33.143 07:14:17 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.143 07:14:17 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:33.143 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.143 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.143 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.143 07:14:17 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:33.143 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.143 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.143 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.143 07:14:17 -- host/discovery.sh@72 -- # notify_id=0 00:21:33.143 07:14:17 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:33.143 07:14:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:33.143 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.143 07:14:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:33.143 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.143 07:14:17 -- host/discovery.sh@59 -- # sort 00:21:33.143 07:14:17 -- host/discovery.sh@59 -- # xargs 00:21:33.144 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.144 07:14:17 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:33.144 07:14:17 -- host/discovery.sh@79 -- # get_bdev_list 00:21:33.144 
07:14:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.144 07:14:17 -- host/discovery.sh@55 -- # sort 00:21:33.144 07:14:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:33.144 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.144 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.144 07:14:17 -- host/discovery.sh@55 -- # xargs 00:21:33.144 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.402 07:14:17 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:33.402 07:14:17 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:33.402 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.402 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.402 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.402 07:14:17 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:33.402 07:14:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:33.402 07:14:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:33.402 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.402 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.402 07:14:17 -- host/discovery.sh@59 -- # xargs 00:21:33.402 07:14:17 -- host/discovery.sh@59 -- # sort 00:21:33.402 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.402 07:14:17 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:33.403 07:14:17 -- host/discovery.sh@83 -- # get_bdev_list 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.403 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.403 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # sort 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # xargs 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:33.403 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.403 07:14:17 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:33.403 07:14:17 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:33.403 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.403 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.403 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.403 07:14:17 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:33.403 07:14:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:33.403 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.403 07:14:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:33.403 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.403 07:14:17 -- host/discovery.sh@59 -- # xargs 00:21:33.403 07:14:17 -- host/discovery.sh@59 -- # sort 00:21:33.403 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.403 07:14:17 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:33.403 07:14:17 -- host/discovery.sh@87 -- # get_bdev_list 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.403 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.403 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # sort 00:21:33.403 07:14:17 -- host/discovery.sh@55 -- # 
xargs 00:21:33.403 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.403 07:14:17 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:33.403 07:14:17 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:33.403 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.403 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.662 [2024-07-11 07:14:17.465018] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.662 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.662 07:14:17 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:33.662 07:14:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:33.662 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.662 07:14:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:33.662 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.662 07:14:17 -- host/discovery.sh@59 -- # sort 00:21:33.662 07:14:17 -- host/discovery.sh@59 -- # xargs 00:21:33.662 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.662 07:14:17 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:33.662 07:14:17 -- host/discovery.sh@93 -- # get_bdev_list 00:21:33.662 07:14:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:33.662 07:14:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:33.662 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.662 07:14:17 -- host/discovery.sh@55 -- # sort 00:21:33.662 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.662 07:14:17 -- host/discovery.sh@55 -- # xargs 00:21:33.662 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.662 07:14:17 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:33.662 07:14:17 -- host/discovery.sh@94 -- # get_notification_count 00:21:33.662 07:14:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:33.662 07:14:17 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:33.662 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.662 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.662 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.662 07:14:17 -- host/discovery.sh@74 -- # notification_count=0 00:21:33.662 07:14:17 -- host/discovery.sh@75 -- # notify_id=0 00:21:33.662 07:14:17 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:33.662 07:14:17 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:33.662 07:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.662 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:33.662 07:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.662 07:14:17 -- host/discovery.sh@100 -- # sleep 1 00:21:34.229 [2024-07-11 07:14:18.103853] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:34.229 [2024-07-11 07:14:18.103882] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:34.229 [2024-07-11 07:14:18.103900] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:34.229 [2024-07-11 07:14:18.189967] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:34.229 [2024-07-11 07:14:18.245492] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:34.229 [2024-07-11 07:14:18.245533] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:34.795 07:14:18 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:34.795 07:14:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:34.795 07:14:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.795 07:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.795 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:34.795 07:14:18 -- host/discovery.sh@59 -- # sort 00:21:34.795 07:14:18 -- host/discovery.sh@59 -- # xargs 00:21:34.795 07:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.795 07:14:18 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.795 07:14:18 -- host/discovery.sh@102 -- # get_bdev_list 00:21:34.795 07:14:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.796 07:14:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:34.796 07:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.796 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:34.796 07:14:18 -- host/discovery.sh@55 -- # sort 00:21:34.796 07:14:18 -- host/discovery.sh@55 -- # xargs 00:21:34.796 07:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.796 07:14:18 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:34.796 07:14:18 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:34.796 07:14:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:34.796 07:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.796 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:34.796 07:14:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:34.796 07:14:18 -- host/discovery.sh@63 -- # sort -n 00:21:34.796 07:14:18 -- 
host/discovery.sh@63 -- # xargs 00:21:34.796 07:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.796 07:14:18 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:34.796 07:14:18 -- host/discovery.sh@104 -- # get_notification_count 00:21:34.796 07:14:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:34.796 07:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.796 07:14:18 -- host/discovery.sh@74 -- # jq '. | length' 00:21:34.796 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:34.796 07:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.054 07:14:18 -- host/discovery.sh@74 -- # notification_count=1 00:21:35.054 07:14:18 -- host/discovery.sh@75 -- # notify_id=1 00:21:35.054 07:14:18 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:35.054 07:14:18 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:35.054 07:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.054 07:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:35.054 07:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.054 07:14:18 -- host/discovery.sh@109 -- # sleep 1 00:21:35.989 07:14:19 -- host/discovery.sh@110 -- # get_bdev_list 00:21:35.989 07:14:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.989 07:14:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.989 07:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.989 07:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:35.989 07:14:19 -- host/discovery.sh@55 -- # xargs 00:21:35.989 07:14:19 -- host/discovery.sh@55 -- # sort 00:21:35.989 07:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.989 07:14:19 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:35.989 07:14:19 -- host/discovery.sh@111 -- # get_notification_count 00:21:35.989 07:14:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:35.989 07:14:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:35.990 07:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.990 07:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:35.990 07:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.990 07:14:19 -- host/discovery.sh@74 -- # notification_count=1 00:21:35.990 07:14:19 -- host/discovery.sh@75 -- # notify_id=2 00:21:35.990 07:14:19 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:35.990 07:14:19 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:35.990 07:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.990 07:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:35.990 [2024-07-11 07:14:19.997988] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:35.990 [2024-07-11 07:14:19.999079] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:35.990 [2024-07-11 07:14:19.999125] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:35.990 07:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.990 07:14:20 -- host/discovery.sh@117 -- # sleep 1 00:21:36.248 [2024-07-11 07:14:20.085132] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:36.248 [2024-07-11 07:14:20.146386] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:36.248 [2024-07-11 07:14:20.146408] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:36.248 [2024-07-11 07:14:20.146414] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:37.182 07:14:21 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:37.182 07:14:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:37.182 07:14:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.182 07:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:37.182 07:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:37.182 07:14:21 -- host/discovery.sh@59 -- # sort 00:21:37.182 07:14:21 -- host/discovery.sh@59 -- # xargs 00:21:37.182 07:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@119 -- # get_bdev_list 00:21:37.182 07:14:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.182 07:14:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.182 07:14:21 -- host/discovery.sh@55 -- # sort 00:21:37.182 07:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:37.182 07:14:21 -- host/discovery.sh@55 -- # xargs 00:21:37.182 07:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:37.182 07:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:37.182 07:14:21 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:37.182 07:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:37.182 07:14:21 -- common/autotest_common.sh@10 -- 
# set +x 00:21:37.182 07:14:21 -- host/discovery.sh@63 -- # sort -n 00:21:37.182 07:14:21 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:37.182 07:14:21 -- host/discovery.sh@63 -- # xargs 00:21:37.182 07:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@121 -- # get_notification_count 00:21:37.182 07:14:21 -- host/discovery.sh@74 -- # jq '. | length' 00:21:37.182 07:14:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:37.182 07:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:37.182 07:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:37.182 07:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@74 -- # notification_count=0 00:21:37.182 07:14:21 -- host/discovery.sh@75 -- # notify_id=2 00:21:37.182 07:14:21 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:37.182 07:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:37.182 07:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:37.182 [2024-07-11 07:14:21.227434] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:37.182 [2024-07-11 07:14:21.227461] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:37.182 07:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:37.182 07:14:21 -- host/discovery.sh@127 -- # sleep 1 00:21:37.182 [2024-07-11 07:14:21.233505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.182 [2024-07-11 07:14:21.233540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.182 [2024-07-11 07:14:21.233569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.182 [2024-07-11 07:14:21.233577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.182 [2024-07-11 07:14:21.233586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.182 [2024-07-11 07:14:21.233594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.182 [2024-07-11 07:14:21.233603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.182 [2024-07-11 07:14:21.233611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.182 [2024-07-11 07:14:21.233619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.440 [2024-07-11 07:14:21.243447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.440 [2024-07-11 07:14:21.253469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.441 [2024-07-11 07:14:21.253558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.253603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.253618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d30bd0 with addr=10.0.0.2, port=4420 00:21:37.441 [2024-07-11 07:14:21.253627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.441 [2024-07-11 07:14:21.253642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.441 [2024-07-11 07:14:21.253654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.441 [2024-07-11 07:14:21.253662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.441 [2024-07-11 07:14:21.253671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.441 [2024-07-11 07:14:21.253684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.441 [2024-07-11 07:14:21.263518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.441 [2024-07-11 07:14:21.263588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.263628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.263642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d30bd0 with addr=10.0.0.2, port=4420 00:21:37.441 [2024-07-11 07:14:21.263652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.441 [2024-07-11 07:14:21.263665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.441 [2024-07-11 07:14:21.263676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.441 [2024-07-11 07:14:21.263684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.441 [2024-07-11 07:14:21.263691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.441 [2024-07-11 07:14:21.263704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.441 [2024-07-11 07:14:21.273562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.441 [2024-07-11 07:14:21.273638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.273679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.273694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d30bd0 with addr=10.0.0.2, port=4420 00:21:37.441 [2024-07-11 07:14:21.273703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.441 [2024-07-11 07:14:21.273716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.441 [2024-07-11 07:14:21.273728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.441 [2024-07-11 07:14:21.273736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.441 [2024-07-11 07:14:21.273743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.441 [2024-07-11 07:14:21.273756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.441 [2024-07-11 07:14:21.283611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.441 [2024-07-11 07:14:21.283697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.283738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.283753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d30bd0 with addr=10.0.0.2, port=4420 00:21:37.441 [2024-07-11 07:14:21.283762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.441 [2024-07-11 07:14:21.283776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.441 [2024-07-11 07:14:21.283797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.441 [2024-07-11 07:14:21.283806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.441 [2024-07-11 07:14:21.283814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.441 [2024-07-11 07:14:21.283826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.441 [2024-07-11 07:14:21.293669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.441 [2024-07-11 07:14:21.293737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.293775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.293789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d30bd0 with addr=10.0.0.2, port=4420 00:21:37.441 [2024-07-11 07:14:21.293798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.441 [2024-07-11 07:14:21.293811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.441 [2024-07-11 07:14:21.293823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.441 [2024-07-11 07:14:21.293830] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.441 [2024-07-11 07:14:21.293838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.441 [2024-07-11 07:14:21.293850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.441 [2024-07-11 07:14:21.303712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.441 [2024-07-11 07:14:21.303780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.303819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.303833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d30bd0 with addr=10.0.0.2, port=4420 00:21:37.441 [2024-07-11 07:14:21.303841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.441 [2024-07-11 07:14:21.303855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.441 [2024-07-11 07:14:21.303874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.441 [2024-07-11 07:14:21.303883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.441 [2024-07-11 07:14:21.303890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.441 [2024-07-11 07:14:21.303903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
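The repeated connect() / errno 111 blocks above and below are the expected fallout of the listener change made at host/discovery.sh@126: the test removed the 4420 listener from nqn.2016-06.io.spdk:cnode0 while the host-side bdev_nvme module still held a path to it, so each reconnect attempt to 10.0.0.2:4420 is refused (ECONNREFUSED) until the next discovery log page drops that path, the "4420 not found" / "4421 found again" lines a little further on. A condensed sketch of the RPC flow for this phase, using the same rpc_cmd helper seen in the trace (the autotest wrapper around SPDK's JSON-RPC client) and with the wait simplified to a single sleep:

# target side: drop the original data listener; the 4421 listener added earlier stays
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side (RPC socket /tmp/host.sock): give the discovery poller a moment, then
# confirm the only path left on controller nvme0 is the 4421 one
sleep 1
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
# expected output: 4421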
00:21:37.441 [2024-07-11 07:14:21.313753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.441 [2024-07-11 07:14:21.313820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.313859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.441 [2024-07-11 07:14:21.313872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d30bd0 with addr=10.0.0.2, port=4420 00:21:37.441 [2024-07-11 07:14:21.313881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30bd0 is same with the state(5) to be set 00:21:37.441 [2024-07-11 07:14:21.313894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d30bd0 (9): Bad file descriptor 00:21:37.441 [2024-07-11 07:14:21.313905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:37.441 [2024-07-11 07:14:21.313912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:37.441 [2024-07-11 07:14:21.313920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:37.441 [2024-07-11 07:14:21.313932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.441 [2024-07-11 07:14:21.315510] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:37.441 [2024-07-11 07:14:21.315534] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:38.376 07:14:22 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:38.376 07:14:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.376 07:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.376 07:14:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.376 07:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:38.376 07:14:22 -- host/discovery.sh@59 -- # sort 00:21:38.376 07:14:22 -- host/discovery.sh@59 -- # xargs 00:21:38.376 07:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.376 07:14:22 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.376 07:14:22 -- host/discovery.sh@129 -- # get_bdev_list 00:21:38.376 07:14:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.376 07:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.376 07:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:38.376 07:14:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.376 07:14:22 -- host/discovery.sh@55 -- # sort 00:21:38.376 07:14:22 -- host/discovery.sh@55 -- # xargs 00:21:38.376 07:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.376 07:14:22 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.376 07:14:22 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:38.376 07:14:22 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:38.376 07:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.376 07:14:22 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:38.376 07:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:38.376 07:14:22 -- 
host/discovery.sh@63 -- # sort -n 00:21:38.376 07:14:22 -- host/discovery.sh@63 -- # xargs 00:21:38.376 07:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.376 07:14:22 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:38.376 07:14:22 -- host/discovery.sh@131 -- # get_notification_count 00:21:38.376 07:14:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:38.376 07:14:22 -- host/discovery.sh@74 -- # jq '. | length' 00:21:38.376 07:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.376 07:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:38.376 07:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.635 07:14:22 -- host/discovery.sh@74 -- # notification_count=0 00:21:38.635 07:14:22 -- host/discovery.sh@75 -- # notify_id=2 00:21:38.635 07:14:22 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:38.635 07:14:22 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:38.635 07:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.635 07:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:38.635 07:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.635 07:14:22 -- host/discovery.sh@135 -- # sleep 1 00:21:39.570 07:14:23 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:39.570 07:14:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.570 07:14:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.570 07:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:39.570 07:14:23 -- host/discovery.sh@59 -- # sort 00:21:39.570 07:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:39.570 07:14:23 -- host/discovery.sh@59 -- # xargs 00:21:39.570 07:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:39.570 07:14:23 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:39.570 07:14:23 -- host/discovery.sh@137 -- # get_bdev_list 00:21:39.570 07:14:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.570 07:14:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.570 07:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:39.570 07:14:23 -- host/discovery.sh@55 -- # sort 00:21:39.570 07:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:39.570 07:14:23 -- host/discovery.sh@55 -- # xargs 00:21:39.570 07:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:39.570 07:14:23 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:39.570 07:14:23 -- host/discovery.sh@138 -- # get_notification_count 00:21:39.570 07:14:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:39.570 07:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:39.570 07:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:39.570 07:14:23 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:39.570 07:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:39.829 07:14:23 -- host/discovery.sh@74 -- # notification_count=2 00:21:39.829 07:14:23 -- host/discovery.sh@75 -- # notify_id=4 00:21:39.829 07:14:23 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:39.829 07:14:23 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:39.829 07:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:39.829 07:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:40.823 [2024-07-11 07:14:24.644247] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:40.823 [2024-07-11 07:14:24.644270] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:40.823 [2024-07-11 07:14:24.644317] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:40.823 [2024-07-11 07:14:24.730359] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:40.823 [2024-07-11 07:14:24.789596] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:40.823 [2024-07-11 07:14:24.789634] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:40.823 07:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.823 07:14:24 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:40.823 07:14:24 -- common/autotest_common.sh@640 -- # local es=0 00:21:40.823 07:14:24 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:40.823 07:14:24 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:40.823 07:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.823 07:14:24 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:40.823 07:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.823 07:14:24 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:40.823 07:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.823 07:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:40.823 2024/07/11 07:14:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:40.823 request: 00:21:40.823 { 00:21:40.823 "method": "bdev_nvme_start_discovery", 00:21:40.823 "params": { 00:21:40.823 "name": "nvme", 00:21:40.823 "trtype": "tcp", 00:21:40.823 "traddr": "10.0.0.2", 00:21:40.823 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:40.823 "adrfam": "ipv4", 00:21:40.823 "trsvcid": "8009", 00:21:40.823 "wait_for_attach": true 00:21:40.823 } 00:21:40.823 } 00:21:40.823 Got JSON-RPC error response 00:21:40.823 GoRPCClient: error on JSON-RPC call 00:21:40.823 07:14:24 -- 
common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:40.823 07:14:24 -- common/autotest_common.sh@643 -- # es=1 00:21:40.823 07:14:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:40.823 07:14:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:40.823 07:14:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:40.823 07:14:24 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:40.823 07:14:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:40.823 07:14:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:40.823 07:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.823 07:14:24 -- host/discovery.sh@67 -- # sort 00:21:40.823 07:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:40.823 07:14:24 -- host/discovery.sh@67 -- # xargs 00:21:40.823 07:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.823 07:14:24 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:40.823 07:14:24 -- host/discovery.sh@147 -- # get_bdev_list 00:21:40.823 07:14:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.823 07:14:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:40.823 07:14:24 -- host/discovery.sh@55 -- # sort 00:21:40.823 07:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.823 07:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:40.823 07:14:24 -- host/discovery.sh@55 -- # xargs 00:21:41.081 07:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.081 07:14:24 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:41.081 07:14:24 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.081 07:14:24 -- common/autotest_common.sh@640 -- # local es=0 00:21:41.081 07:14:24 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.081 07:14:24 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:41.081 07:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:41.081 07:14:24 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:41.081 07:14:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:41.081 07:14:24 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.081 07:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.081 07:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:41.081 2024/07/11 07:14:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:41.081 request: 00:21:41.081 { 00:21:41.081 "method": "bdev_nvme_start_discovery", 00:21:41.081 "params": { 00:21:41.081 "name": "nvme_second", 00:21:41.081 "trtype": "tcp", 00:21:41.081 "traddr": "10.0.0.2", 00:21:41.081 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:41.081 "adrfam": "ipv4", 00:21:41.081 "trsvcid": "8009", 00:21:41.081 "wait_for_attach": true 00:21:41.081 } 00:21:41.081 } 00:21:41.081 Got JSON-RPC error response 00:21:41.081 
GoRPCClient: error on JSON-RPC call 00:21:41.081 07:14:24 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:41.081 07:14:24 -- common/autotest_common.sh@643 -- # es=1 00:21:41.081 07:14:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:41.081 07:14:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:41.081 07:14:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:41.081 07:14:24 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:41.081 07:14:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:41.081 07:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.081 07:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:41.081 07:14:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:41.081 07:14:24 -- host/discovery.sh@67 -- # sort 00:21:41.081 07:14:24 -- host/discovery.sh@67 -- # xargs 00:21:41.081 07:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.081 07:14:24 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:41.081 07:14:24 -- host/discovery.sh@153 -- # get_bdev_list 00:21:41.081 07:14:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.081 07:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.081 07:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:41.081 07:14:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.081 07:14:24 -- host/discovery.sh@55 -- # xargs 00:21:41.081 07:14:24 -- host/discovery.sh@55 -- # sort 00:21:41.081 07:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.081 07:14:25 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:41.081 07:14:25 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.081 07:14:25 -- common/autotest_common.sh@640 -- # local es=0 00:21:41.081 07:14:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.081 07:14:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:41.081 07:14:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:41.081 07:14:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:41.081 07:14:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:41.081 07:14:25 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:41.081 07:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.081 07:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:42.017 [2024-07-11 07:14:26.055525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.017 [2024-07-11 07:14:26.055597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.017 [2024-07-11 07:14:26.055614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88410 with addr=10.0.0.2, port=8010 00:21:42.017 [2024-07-11 07:14:26.055629] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:42.017 [2024-07-11 07:14:26.055637] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:42.017 [2024-07-11 07:14:26.055645] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:43.393 [2024-07-11 07:14:27.055502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.393 [2024-07-11 07:14:27.055584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.393 [2024-07-11 07:14:27.055602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88410 with addr=10.0.0.2, port=8010 00:21:43.393 [2024-07-11 07:14:27.055615] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:43.393 [2024-07-11 07:14:27.055623] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:43.393 [2024-07-11 07:14:27.055631] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:44.329 [2024-07-11 07:14:28.055430] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:44.329 2024/07/11 07:14:28 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:44.329 request: 00:21:44.329 { 00:21:44.329 "method": "bdev_nvme_start_discovery", 00:21:44.329 "params": { 00:21:44.329 "name": "nvme_second", 00:21:44.329 "trtype": "tcp", 00:21:44.329 "traddr": "10.0.0.2", 00:21:44.329 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:44.329 "adrfam": "ipv4", 00:21:44.329 "trsvcid": "8010", 00:21:44.329 "attach_timeout_ms": 3000 00:21:44.329 } 00:21:44.329 } 00:21:44.329 Got JSON-RPC error response 00:21:44.329 GoRPCClient: error on JSON-RPC call 00:21:44.329 07:14:28 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:44.329 07:14:28 -- common/autotest_common.sh@643 -- # es=1 00:21:44.329 07:14:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:44.329 07:14:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:44.329 07:14:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:44.329 07:14:28 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:44.329 07:14:28 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:44.329 07:14:28 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:44.329 07:14:28 -- host/discovery.sh@67 -- # xargs 00:21:44.329 07:14:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:44.329 07:14:28 -- host/discovery.sh@67 -- # sort 00:21:44.329 07:14:28 -- common/autotest_common.sh@10 -- # set +x 00:21:44.329 07:14:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:44.329 07:14:28 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:44.329 07:14:28 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:44.329 07:14:28 -- host/discovery.sh@162 -- # kill 84844 00:21:44.329 07:14:28 -- host/discovery.sh@163 -- # nvmftestfini 00:21:44.329 07:14:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:44.329 07:14:28 -- nvmf/common.sh@116 -- # sync 00:21:44.329 07:14:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:44.329 07:14:28 -- nvmf/common.sh@119 -- # set +e 00:21:44.329 07:14:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:44.329 07:14:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:44.329 rmmod nvme_tcp 00:21:44.329 rmmod nvme_fabrics 00:21:44.329 rmmod nvme_keyring 00:21:44.329 07:14:28 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:21:44.329 07:14:28 -- nvmf/common.sh@123 -- # set -e 00:21:44.329 07:14:28 -- nvmf/common.sh@124 -- # return 0 00:21:44.329 07:14:28 -- nvmf/common.sh@477 -- # '[' -n 84794 ']' 00:21:44.329 07:14:28 -- nvmf/common.sh@478 -- # killprocess 84794 00:21:44.329 07:14:28 -- common/autotest_common.sh@926 -- # '[' -z 84794 ']' 00:21:44.329 07:14:28 -- common/autotest_common.sh@930 -- # kill -0 84794 00:21:44.329 07:14:28 -- common/autotest_common.sh@931 -- # uname 00:21:44.329 07:14:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.329 07:14:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84794 00:21:44.329 killing process with pid 84794 00:21:44.329 07:14:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:44.329 07:14:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:44.329 07:14:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84794' 00:21:44.329 07:14:28 -- common/autotest_common.sh@945 -- # kill 84794 00:21:44.329 07:14:28 -- common/autotest_common.sh@950 -- # wait 84794 00:21:44.588 07:14:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:44.588 07:14:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:44.588 07:14:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:44.588 07:14:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.588 07:14:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:44.588 07:14:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.588 07:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.588 07:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.588 07:14:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:44.588 00:21:44.588 real 0m13.983s 00:21:44.588 user 0m27.217s 00:21:44.588 sys 0m1.700s 00:21:44.588 07:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.588 07:14:28 -- common/autotest_common.sh@10 -- # set +x 00:21:44.588 ************************************ 00:21:44.588 END TEST nvmf_discovery 00:21:44.588 ************************************ 00:21:44.847 07:14:28 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:44.847 07:14:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:44.847 07:14:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:44.847 07:14:28 -- common/autotest_common.sh@10 -- # set +x 00:21:44.847 ************************************ 00:21:44.847 START TEST nvmf_discovery_remove_ifc 00:21:44.847 ************************************ 00:21:44.847 07:14:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:44.847 * Looking for test storage... 
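The failure traced above is the negative half of the discovery test: nvme_second points at 10.0.0.2:8010, where nothing listens, so each connect() fails with errno 111 and the RPC returns Code=-110 after the 3000 ms attach timeout. A minimal sketch of issuing the same call by hand against the host-side app on /tmp/host.sock; only the JSON parameter name (attach_timeout_ms) is confirmed by the log, the long-option spelling for it is an assumption.

    # Re-create the timed-out discovery attach from the trace (sketch, not the test script).
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --wait-for-attach --attach-timeout-ms 3000   # flag spelling assumed; RPC param is attach_timeout_ms
    # Expected: "Connection timed out" after ~3 s, since no target listens on port 8010.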
00:21:44.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.847 07:14:28 -- nvmf/common.sh@7 -- # uname -s 00:21:44.847 07:14:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.847 07:14:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.847 07:14:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.847 07:14:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.847 07:14:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.847 07:14:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.847 07:14:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.847 07:14:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.847 07:14:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.847 07:14:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.847 07:14:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:21:44.847 07:14:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:21:44.847 07:14:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.847 07:14:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.847 07:14:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.847 07:14:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.847 07:14:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.847 07:14:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.847 07:14:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.847 07:14:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.847 07:14:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.847 07:14:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.847 07:14:28 -- 
paths/export.sh@5 -- # export PATH 00:21:44.847 07:14:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.847 07:14:28 -- nvmf/common.sh@46 -- # : 0 00:21:44.847 07:14:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:44.847 07:14:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:44.847 07:14:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:44.847 07:14:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.847 07:14:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.847 07:14:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:44.847 07:14:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:44.847 07:14:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:44.847 07:14:28 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:44.847 07:14:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:44.847 07:14:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.847 07:14:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:44.848 07:14:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:44.848 07:14:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:44.848 07:14:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.848 07:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.848 07:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.848 07:14:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:44.848 07:14:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:44.848 07:14:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:44.848 07:14:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:44.848 07:14:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:44.848 07:14:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:44.848 07:14:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.848 07:14:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.848 07:14:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:44.848 07:14:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:44.848 07:14:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.848 07:14:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.848 07:14:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.848 07:14:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
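NVMF_TARGET_NS_CMD, defined at the end of the trace above, is just an array holding the `ip netns exec nvmf_tgt_ns_spdk` prefix; later steps expand it in front of a command to run that command inside the target namespace. A small usage sketch, mirroring how the pings and the target launch below use it:

    # Expanding the array runs the wrapped command inside the target network namespace.
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    "${NVMF_TARGET_NS_CMD[@]}" ip addr show         # addresses as the target sees them
    "${NVMF_TARGET_NS_CMD[@]}" ping -c 1 10.0.0.1   # reach the initiator from inside the namespace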
00:21:44.848 07:14:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.848 07:14:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.848 07:14:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.848 07:14:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.848 07:14:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:44.848 07:14:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:44.848 Cannot find device "nvmf_tgt_br" 00:21:44.848 07:14:28 -- nvmf/common.sh@154 -- # true 00:21:44.848 07:14:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.848 Cannot find device "nvmf_tgt_br2" 00:21:44.848 07:14:28 -- nvmf/common.sh@155 -- # true 00:21:44.848 07:14:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:44.848 07:14:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:44.848 Cannot find device "nvmf_tgt_br" 00:21:44.848 07:14:28 -- nvmf/common.sh@157 -- # true 00:21:44.848 07:14:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:44.848 Cannot find device "nvmf_tgt_br2" 00:21:44.848 07:14:28 -- nvmf/common.sh@158 -- # true 00:21:44.848 07:14:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:44.848 07:14:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:44.848 07:14:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.848 07:14:28 -- nvmf/common.sh@161 -- # true 00:21:44.848 07:14:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.848 07:14:28 -- nvmf/common.sh@162 -- # true 00:21:44.848 07:14:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.848 07:14:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:45.107 07:14:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:45.107 07:14:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:45.107 07:14:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:45.107 07:14:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:45.107 07:14:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:45.107 07:14:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:45.107 07:14:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:45.107 07:14:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:45.107 07:14:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:45.107 07:14:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:45.107 07:14:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:45.107 07:14:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:45.107 07:14:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:45.107 07:14:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:45.107 07:14:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:45.107 07:14:29 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:21:45.107 07:14:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:45.107 07:14:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:45.107 07:14:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:45.107 07:14:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:45.107 07:14:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:45.107 07:14:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:45.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:45.107 00:21:45.107 --- 10.0.0.2 ping statistics --- 00:21:45.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.107 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:45.107 07:14:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:45.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:45.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:21:45.107 00:21:45.107 --- 10.0.0.3 ping statistics --- 00:21:45.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.107 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:45.107 07:14:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:45.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:21:45.107 00:21:45.107 --- 10.0.0.1 ping statistics --- 00:21:45.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.107 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:45.107 07:14:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.107 07:14:29 -- nvmf/common.sh@421 -- # return 0 00:21:45.107 07:14:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:45.107 07:14:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.107 07:14:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:45.107 07:14:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:45.107 07:14:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.107 07:14:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:45.107 07:14:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:45.107 07:14:29 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:45.107 07:14:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:45.107 07:14:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:45.107 07:14:29 -- common/autotest_common.sh@10 -- # set +x 00:21:45.107 07:14:29 -- nvmf/common.sh@469 -- # nvmfpid=85347 00:21:45.107 07:14:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.107 07:14:29 -- nvmf/common.sh@470 -- # waitforlisten 85347 00:21:45.107 07:14:29 -- common/autotest_common.sh@819 -- # '[' -z 85347 ']' 00:21:45.107 07:14:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.107 07:14:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:45.107 07:14:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
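The nvmf_veth_init trace above (including the expected "Cannot find device" / "Cannot open network namespace" noise from the pre-cleanup step) builds one bridge with three veth pairs: the initiator end stays in the root namespace with 10.0.0.1, and the two target ends move into nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3. A condensed sketch of the same topology, reusing the interface names from the log; it needs root and omits the error-tolerant teardown:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the root-namespace ends
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target over the bridge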
00:21:45.107 07:14:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:45.107 07:14:29 -- common/autotest_common.sh@10 -- # set +x 00:21:45.366 [2024-07-11 07:14:29.192987] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:45.366 [2024-07-11 07:14:29.193263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.366 [2024-07-11 07:14:29.334776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.624 [2024-07-11 07:14:29.439632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.625 [2024-07-11 07:14:29.440070] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.625 [2024-07-11 07:14:29.440227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.625 [2024-07-11 07:14:29.440521] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.625 [2024-07-11 07:14:29.440692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.193 07:14:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:46.193 07:14:30 -- common/autotest_common.sh@852 -- # return 0 00:21:46.193 07:14:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:46.193 07:14:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:46.193 07:14:30 -- common/autotest_common.sh@10 -- # set +x 00:21:46.193 07:14:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.193 07:14:30 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:46.193 07:14:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.193 07:14:30 -- common/autotest_common.sh@10 -- # set +x 00:21:46.193 [2024-07-11 07:14:30.233007] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.193 [2024-07-11 07:14:30.241112] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:46.452 null0 00:21:46.452 [2024-07-11 07:14:30.273062] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.452 07:14:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.452 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:46.452 07:14:30 -- host/discovery_remove_ifc.sh@59 -- # hostpid=85397 00:21:46.452 07:14:30 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:46.452 07:14:30 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85397 /tmp/host.sock 00:21:46.452 07:14:30 -- common/autotest_common.sh@819 -- # '[' -z 85397 ']' 00:21:46.452 07:14:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:46.452 07:14:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.452 07:14:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:46.452 07:14:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.452 07:14:30 -- common/autotest_common.sh@10 -- # set +x 00:21:46.452 [2024-07-11 07:14:30.356385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
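Above, waitforlisten blocks until the target app (pid 85347, driven through /var/tmp/spdk.sock inside the namespace) answers RPCs, and the same pattern repeats just below for the host-side app started with -r /tmp/host.sock --wait-for-rpc. A simplified stand-in for that wait, assuming only the standard rpc_get_methods RPC and rpc.py's -t timeout option; the helper name is made up for this sketch:

    # Poll a private RPC socket until the freshly started SPDK app answers.
    wait_for_rpc_socket() {
        local sock=$1 pid=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # the app died before listening
            if [[ -S $sock ]] && ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
                return 0                                # socket exists and RPC answered
            fi
            sleep 0.1
        done
        return 1
    }
    # wait_for_rpc_socket /tmp/host.sock "$hostpid"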
00:21:46.452 [2024-07-11 07:14:30.356914] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85397 ] 00:21:46.452 [2024-07-11 07:14:30.493543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.712 [2024-07-11 07:14:30.567117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:46.712 [2024-07-11 07:14:30.567619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.280 07:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.280 07:14:31 -- common/autotest_common.sh@852 -- # return 0 00:21:47.280 07:14:31 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.280 07:14:31 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:47.280 07:14:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.280 07:14:31 -- common/autotest_common.sh@10 -- # set +x 00:21:47.280 07:14:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.280 07:14:31 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:47.280 07:14:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.280 07:14:31 -- common/autotest_common.sh@10 -- # set +x 00:21:47.280 07:14:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.280 07:14:31 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:47.280 07:14:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.280 07:14:31 -- common/autotest_common.sh@10 -- # set +x 00:21:48.657 [2024-07-11 07:14:32.316368] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.657 [2024-07-11 07:14:32.316395] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.657 [2024-07-11 07:14:32.316413] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.657 [2024-07-11 07:14:32.404479] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:48.657 [2024-07-11 07:14:32.466999] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:48.657 [2024-07-11 07:14:32.467042] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:48.657 [2024-07-11 07:14:32.467067] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:48.657 [2024-07-11 07:14:32.467082] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.657 [2024-07-11 07:14:32.467103] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:48.657 07:14:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:48.657 07:14:32 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.657 07:14:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:48.657 07:14:32 -- common/autotest_common.sh@10 -- # set +x 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:48.657 [2024-07-11 07:14:32.474853] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1034330 was disconnected and freed. delete nvme_qpair. 00:21:48.657 07:14:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.657 07:14:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.657 07:14:32 -- common/autotest_common.sh@10 -- # set +x 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:48.657 07:14:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:48.658 07:14:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:48.658 07:14:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.658 07:14:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:48.658 07:14:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:49.602 07:14:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.602 07:14:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.602 07:14:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.602 07:14:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.602 07:14:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.602 07:14:33 -- common/autotest_common.sh@10 -- # set +x 00:21:49.602 07:14:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.602 07:14:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.602 07:14:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:49.602 07:14:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:50.978 07:14:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:50.978 07:14:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.978 07:14:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.978 07:14:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:50.978 07:14:34 -- common/autotest_common.sh@10 -- # set +x 00:21:50.978 07:14:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:50.978 07:14:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:50.978 07:14:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.978 07:14:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:50.978 07:14:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:51.913 07:14:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:51.913 07:14:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
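The repeating rpc_cmd | jq | sort | xargs lines above are the test's get_bdev_list helper, and each `[[ nvme0n1 != ... ]]` / `sleep 1` pair is wait_for_bdev polling until the list matches what the test expects: nvme0n1 right after discovery attaches, then the empty string once 10.0.0.2 is deleted and nvmf_tgt_if is downed. A hedged reconstruction of the two helpers, inferred from the trace rather than copied from discovery_remove_ifc.sh:

    get_bdev_list() {
        # All bdevs known to the host app, as one sorted, space-separated string.
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    # wait_for_bdev nvme0n1   # after bdev_nvme_start_discovery attaches the subsystem
    # wait_for_bdev ''        # after the target-side interface is removed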
00:21:51.913 07:14:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:51.913 07:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.913 07:14:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:51.913 07:14:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:51.913 07:14:35 -- common/autotest_common.sh@10 -- # set +x 00:21:51.913 07:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.913 07:14:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:51.913 07:14:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:52.849 07:14:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:52.849 07:14:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.849 07:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.849 07:14:36 -- common/autotest_common.sh@10 -- # set +x 00:21:52.849 07:14:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:52.849 07:14:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:52.849 07:14:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:52.849 07:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.849 07:14:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:52.849 07:14:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:53.784 07:14:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:53.784 07:14:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.784 07:14:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.784 07:14:37 -- common/autotest_common.sh@10 -- # set +x 00:21:53.784 07:14:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:53.784 07:14:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:53.784 07:14:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:54.042 07:14:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:54.042 [2024-07-11 07:14:37.895444] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:54.042 [2024-07-11 07:14:37.895535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.042 [2024-07-11 07:14:37.895551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.042 [2024-07-11 07:14:37.895562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.042 [2024-07-11 07:14:37.895570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.042 [2024-07-11 07:14:37.895579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.043 [2024-07-11 07:14:37.895588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.043 [2024-07-11 07:14:37.895596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.043 [2024-07-11 07:14:37.895604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.043 [2024-07-11 
07:14:37.895613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.043 [2024-07-11 07:14:37.895620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.043 [2024-07-11 07:14:37.895629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffdc40 is same with the state(5) to be set 00:21:54.043 07:14:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:54.043 07:14:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:54.043 [2024-07-11 07:14:37.905440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffdc40 (9): Bad file descriptor 00:21:54.043 [2024-07-11 07:14:37.915474] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.978 07:14:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:54.978 07:14:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.978 07:14:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:54.978 07:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:54.978 07:14:38 -- common/autotest_common.sh@10 -- # set +x 00:21:54.978 07:14:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:54.978 07:14:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:54.978 [2024-07-11 07:14:38.919574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:55.911 [2024-07-11 07:14:39.944564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:55.911 [2024-07-11 07:14:39.944913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffdc40 with addr=10.0.0.2, port=4420 00:21:55.911 [2024-07-11 07:14:39.945173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffdc40 is same with the state(5) to be set 00:21:55.911 [2024-07-11 07:14:39.945230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:55.911 [2024-07-11 07:14:39.945254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:55.911 [2024-07-11 07:14:39.945273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:55.911 [2024-07-11 07:14:39.945293] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:21:55.911 [2024-07-11 07:14:39.946381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffdc40 (9): Bad file descriptor 00:21:55.911 [2024-07-11 07:14:39.946505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:55.911 [2024-07-11 07:14:39.946566] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:55.911 [2024-07-11 07:14:39.946635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.911 [2024-07-11 07:14:39.946665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.911 [2024-07-11 07:14:39.946691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.911 [2024-07-11 07:14:39.946712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.911 [2024-07-11 07:14:39.946733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.911 [2024-07-11 07:14:39.946754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.911 [2024-07-11 07:14:39.946776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.911 [2024-07-11 07:14:39.946796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.911 [2024-07-11 07:14:39.946824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:55.911 [2024-07-11 07:14:39.946844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.911 [2024-07-11 07:14:39.946864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
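The cascade above is the host-side reaction to losing nvmf_tgt_if: reads fail with errno 110, the admin qpair reports a bad file descriptor, reconnect attempts time out, and with --ctrlr-loss-timeout-sec 2 the controller is failed and its discovery entry removed, which is what lets wait_for_bdev '' succeed. While this is in progress the state is visible over the same private socket; a small sketch using the standard bdev_nvme_get_controllers RPC:

    # Inspect the failing controller and the (soon empty) bdev list during the outage.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # shows nvme0 while it still exists
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs              # returns [] once nvme0n1 is gone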
00:21:55.911 [2024-07-11 07:14:39.946897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa51c0 (9): Bad file descriptor 00:21:55.911 [2024-07-11 07:14:39.947532] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:55.911 [2024-07-11 07:14:39.947564] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:55.911 07:14:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.170 07:14:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:56.170 07:14:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:57.104 07:14:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:57.104 07:14:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.104 07:14:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.104 07:14:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:57.104 07:14:40 -- common/autotest_common.sh@10 -- # set +x 00:21:57.104 07:14:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:57.104 07:14:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:57.104 07:14:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:57.104 07:14:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.104 07:14:41 -- common/autotest_common.sh@10 -- # set +x 00:21:57.104 07:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:57.104 07:14:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:58.040 [2024-07-11 07:14:41.957726] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:58.040 [2024-07-11 07:14:41.957761] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:58.040 [2024-07-11 07:14:41.957781] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:58.040 [2024-07-11 07:14:42.043824] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:58.300 [2024-07-11 07:14:42.099024] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:58.300 [2024-07-11 07:14:42.099077] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:58.300 [2024-07-11 07:14:42.099105] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:58.300 [2024-07-11 07:14:42.099122] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:21:58.300 [2024-07-11 07:14:42.099132] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:58.300 [2024-07-11 07:14:42.106242] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfee6f0 was disconnected and freed. delete nvme_qpair. 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:58.300 07:14:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.300 07:14:42 -- common/autotest_common.sh@10 -- # set +x 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:58.300 07:14:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:58.300 07:14:42 -- host/discovery_remove_ifc.sh@90 -- # killprocess 85397 00:21:58.300 07:14:42 -- common/autotest_common.sh@926 -- # '[' -z 85397 ']' 00:21:58.300 07:14:42 -- common/autotest_common.sh@930 -- # kill -0 85397 00:21:58.300 07:14:42 -- common/autotest_common.sh@931 -- # uname 00:21:58.300 07:14:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:58.300 07:14:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85397 00:21:58.300 killing process with pid 85397 00:21:58.300 07:14:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:58.300 07:14:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:58.300 07:14:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85397' 00:21:58.300 07:14:42 -- common/autotest_common.sh@945 -- # kill 85397 00:21:58.300 07:14:42 -- common/autotest_common.sh@950 -- # wait 85397 00:21:58.558 07:14:42 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:58.558 07:14:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:58.558 07:14:42 -- nvmf/common.sh@116 -- # sync 00:21:58.558 07:14:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:58.558 07:14:42 -- nvmf/common.sh@119 -- # set +e 00:21:58.558 07:14:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:58.558 07:14:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:58.558 rmmod nvme_tcp 00:21:58.558 rmmod nvme_fabrics 00:21:58.558 rmmod nvme_keyring 00:21:58.558 07:14:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:58.558 07:14:42 -- nvmf/common.sh@123 -- # set -e 00:21:58.558 07:14:42 -- nvmf/common.sh@124 -- # return 0 00:21:58.558 07:14:42 -- nvmf/common.sh@477 -- # '[' -n 85347 ']' 00:21:58.558 07:14:42 -- nvmf/common.sh@478 -- # killprocess 85347 00:21:58.559 07:14:42 -- common/autotest_common.sh@926 -- # '[' -z 85347 ']' 00:21:58.559 07:14:42 -- common/autotest_common.sh@930 -- # kill -0 85347 00:21:58.559 07:14:42 -- common/autotest_common.sh@931 -- # uname 00:21:58.559 07:14:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:58.559 07:14:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85347 00:21:58.817 killing process with pid 85347 00:21:58.817 07:14:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:58.817 07:14:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:21:58.817 07:14:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85347' 00:21:58.817 07:14:42 -- common/autotest_common.sh@945 -- # kill 85347 00:21:58.817 07:14:42 -- common/autotest_common.sh@950 -- # wait 85347 00:21:58.817 07:14:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:58.817 07:14:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:58.817 07:14:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:58.817 07:14:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.817 07:14:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:58.817 07:14:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.817 07:14:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.817 07:14:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.076 07:14:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:59.076 00:21:59.076 real 0m14.247s 00:21:59.076 user 0m24.309s 00:21:59.076 sys 0m1.598s 00:21:59.076 07:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.076 ************************************ 00:21:59.076 END TEST nvmf_discovery_remove_ifc 00:21:59.076 ************************************ 00:21:59.076 07:14:42 -- common/autotest_common.sh@10 -- # set +x 00:21:59.076 07:14:42 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:21:59.076 07:14:42 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:59.076 07:14:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:59.076 07:14:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:59.076 07:14:42 -- common/autotest_common.sh@10 -- # set +x 00:21:59.076 ************************************ 00:21:59.076 START TEST nvmf_digest 00:21:59.076 ************************************ 00:21:59.076 07:14:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:59.076 * Looking for test storage... 
00:21:59.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.076 07:14:43 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.076 07:14:43 -- nvmf/common.sh@7 -- # uname -s 00:21:59.076 07:14:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.076 07:14:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.076 07:14:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.076 07:14:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.076 07:14:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.076 07:14:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.076 07:14:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.076 07:14:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.076 07:14:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.076 07:14:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.076 07:14:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:21:59.076 07:14:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:21:59.076 07:14:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.076 07:14:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.076 07:14:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.076 07:14:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.076 07:14:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.076 07:14:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.076 07:14:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.076 07:14:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.076 07:14:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.076 07:14:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.076 07:14:43 -- paths/export.sh@5 
-- # export PATH 00:21:59.076 07:14:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.076 07:14:43 -- nvmf/common.sh@46 -- # : 0 00:21:59.076 07:14:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:59.076 07:14:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:59.076 07:14:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:59.076 07:14:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.076 07:14:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.076 07:14:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:59.076 07:14:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:59.076 07:14:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:59.076 07:14:43 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:59.076 07:14:43 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:59.076 07:14:43 -- host/digest.sh@16 -- # runtime=2 00:21:59.076 07:14:43 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:21:59.076 07:14:43 -- host/digest.sh@132 -- # nvmftestinit 00:21:59.076 07:14:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:59.076 07:14:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.076 07:14:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:59.076 07:14:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:59.076 07:14:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:59.076 07:14:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.076 07:14:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.076 07:14:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.076 07:14:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:59.076 07:14:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:59.076 07:14:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:59.076 07:14:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:59.076 07:14:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:59.076 07:14:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:59.076 07:14:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.076 07:14:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.076 07:14:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.076 07:14:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:59.076 07:14:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.076 07:14:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.076 07:14:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.076 07:14:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.076 07:14:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.076 07:14:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.076 07:14:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.076 07:14:43 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.076 07:14:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:59.076 07:14:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:59.076 Cannot find device "nvmf_tgt_br" 00:21:59.076 07:14:43 -- nvmf/common.sh@154 -- # true 00:21:59.076 07:14:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.076 Cannot find device "nvmf_tgt_br2" 00:21:59.076 07:14:43 -- nvmf/common.sh@155 -- # true 00:21:59.076 07:14:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:59.076 07:14:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:59.335 Cannot find device "nvmf_tgt_br" 00:21:59.335 07:14:43 -- nvmf/common.sh@157 -- # true 00:21:59.335 07:14:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:59.335 Cannot find device "nvmf_tgt_br2" 00:21:59.335 07:14:43 -- nvmf/common.sh@158 -- # true 00:21:59.335 07:14:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:59.335 07:14:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:59.335 07:14:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.335 07:14:43 -- nvmf/common.sh@161 -- # true 00:21:59.335 07:14:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.335 07:14:43 -- nvmf/common.sh@162 -- # true 00:21:59.335 07:14:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.335 07:14:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.335 07:14:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.335 07:14:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.335 07:14:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.335 07:14:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.335 07:14:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.335 07:14:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.335 07:14:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.335 07:14:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:59.335 07:14:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:59.335 07:14:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:59.335 07:14:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:59.335 07:14:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.335 07:14:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.335 07:14:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.335 07:14:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:59.335 07:14:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:59.335 07:14:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.335 07:14:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.335 07:14:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.594 
07:14:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.594 07:14:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.594 07:14:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:59.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:21:59.594 00:21:59.594 --- 10.0.0.2 ping statistics --- 00:21:59.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.594 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:59.594 07:14:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:59.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:59.594 00:21:59.594 --- 10.0.0.3 ping statistics --- 00:21:59.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.594 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:59.594 07:14:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:59.594 00:21:59.594 --- 10.0.0.1 ping statistics --- 00:21:59.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.594 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:59.594 07:14:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.594 07:14:43 -- nvmf/common.sh@421 -- # return 0 00:21:59.594 07:14:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.594 07:14:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.594 07:14:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:59.594 07:14:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:59.594 07:14:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.594 07:14:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:59.594 07:14:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:59.594 07:14:43 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:59.594 07:14:43 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:21:59.594 07:14:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:59.594 07:14:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:59.594 07:14:43 -- common/autotest_common.sh@10 -- # set +x 00:21:59.594 ************************************ 00:21:59.594 START TEST nvmf_digest_clean 00:21:59.594 ************************************ 00:21:59.594 07:14:43 -- common/autotest_common.sh@1104 -- # run_digest 00:21:59.594 07:14:43 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:21:59.594 07:14:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:59.594 07:14:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:59.594 07:14:43 -- common/autotest_common.sh@10 -- # set +x 00:21:59.594 07:14:43 -- nvmf/common.sh@469 -- # nvmfpid=85807 00:21:59.594 07:14:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:59.594 07:14:43 -- nvmf/common.sh@470 -- # waitforlisten 85807 00:21:59.594 07:14:43 -- common/autotest_common.sh@819 -- # '[' -z 85807 ']' 00:21:59.594 07:14:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.594 07:14:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:59.594 07:14:43 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.594 07:14:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:59.594 07:14:43 -- common/autotest_common.sh@10 -- # set +x 00:21:59.594 [2024-07-11 07:14:43.511632] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:59.594 [2024-07-11 07:14:43.511688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.594 [2024-07-11 07:14:43.648397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.852 [2024-07-11 07:14:43.775437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:59.852 [2024-07-11 07:14:43.775639] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.852 [2024-07-11 07:14:43.775657] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.852 [2024-07-11 07:14:43.775670] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.852 [2024-07-11 07:14:43.775714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.786 07:14:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.786 07:14:44 -- common/autotest_common.sh@852 -- # return 0 00:22:00.786 07:14:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.786 07:14:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:00.786 07:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:00.786 07:14:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.786 07:14:44 -- host/digest.sh@120 -- # common_target_config 00:22:00.786 07:14:44 -- host/digest.sh@43 -- # rpc_cmd 00:22:00.786 07:14:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.786 07:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:00.786 null0 00:22:00.786 [2024-07-11 07:14:44.671382] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.786 [2024-07-11 07:14:44.695543] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.786 07:14:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.786 07:14:44 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:00.786 07:14:44 -- host/digest.sh@77 -- # local rw bs qd 00:22:00.786 07:14:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:00.786 07:14:44 -- host/digest.sh@80 -- # rw=randread 00:22:00.786 07:14:44 -- host/digest.sh@80 -- # bs=4096 00:22:00.786 07:14:44 -- host/digest.sh@80 -- # qd=128 00:22:00.786 07:14:44 -- host/digest.sh@82 -- # bperfpid=85857 00:22:00.786 07:14:44 -- host/digest.sh@83 -- # waitforlisten 85857 /var/tmp/bperf.sock 00:22:00.786 07:14:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:00.786 07:14:44 -- common/autotest_common.sh@819 -- # '[' -z 85857 ']' 00:22:00.786 07:14:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:00.786 07:14:44 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:00.786 07:14:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:00.786 07:14:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.786 07:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:00.786 [2024-07-11 07:14:44.757634] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:00.786 [2024-07-11 07:14:44.757722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85857 ] 00:22:01.083 [2024-07-11 07:14:44.896370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.083 [2024-07-11 07:14:44.996081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.648 07:14:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.648 07:14:45 -- common/autotest_common.sh@852 -- # return 0 00:22:01.648 07:14:45 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:01.648 07:14:45 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:01.648 07:14:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:01.906 07:14:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:01.906 07:14:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:02.472 nvme0n1 00:22:02.472 07:14:46 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:02.472 07:14:46 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:02.472 Running I/O for 2 seconds... 
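Before either digest suite can issue any I/O, nvmf/common.sh (the @165-@201 commands traced earlier in this run) builds the veth/bridge topology the NVMe/TCP traffic flows over. The following is a condensed sketch of that setup, reconstructed from the commands visible in the trace; interface names, addresses and the port-4420 firewall rule are taken from the log, while cleanup and error handling are omitted. It must run as root.

#!/usr/bin/env bash
# Sketch of the nvmf test network as set up by nvmf/common.sh in this run.
set -e

NS=nvmf_tgt_ns_spdk

# Namespace that will host nvmf_tgt.
ip netns add "$NS"

# Three veth pairs: one initiator-side, two target-side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends move into the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addresses: 10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP (port 4420) in and allow bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings in both directions, mirroring the trace above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1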
00:22:04.371 00:22:04.371 Latency(us) 00:22:04.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.371 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:04.371 nvme0n1 : 2.00 23894.16 93.34 0.00 0.00 5351.26 2249.08 13226.36 00:22:04.371 =================================================================================================================== 00:22:04.371 Total : 23894.16 93.34 0.00 0.00 5351.26 2249.08 13226.36 00:22:04.371 0 00:22:04.371 07:14:48 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:04.371 07:14:48 -- host/digest.sh@92 -- # get_accel_stats 00:22:04.371 07:14:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:04.371 07:14:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:04.371 07:14:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:04.371 | select(.opcode=="crc32c") 00:22:04.371 | "\(.module_name) \(.executed)"' 00:22:04.628 07:14:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:04.628 07:14:48 -- host/digest.sh@93 -- # exp_module=software 00:22:04.628 07:14:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:04.628 07:14:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:04.628 07:14:48 -- host/digest.sh@97 -- # killprocess 85857 00:22:04.628 07:14:48 -- common/autotest_common.sh@926 -- # '[' -z 85857 ']' 00:22:04.628 07:14:48 -- common/autotest_common.sh@930 -- # kill -0 85857 00:22:04.628 07:14:48 -- common/autotest_common.sh@931 -- # uname 00:22:04.628 07:14:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:04.628 07:14:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85857 00:22:04.885 07:14:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:04.885 07:14:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:04.885 killing process with pid 85857 00:22:04.885 07:14:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85857' 00:22:04.885 Received shutdown signal, test time was about 2.000000 seconds 00:22:04.885 00:22:04.885 Latency(us) 00:22:04.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.885 =================================================================================================================== 00:22:04.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.885 07:14:48 -- common/autotest_common.sh@945 -- # kill 85857 00:22:04.885 07:14:48 -- common/autotest_common.sh@950 -- # wait 85857 00:22:04.885 07:14:48 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:04.885 07:14:48 -- host/digest.sh@77 -- # local rw bs qd 00:22:04.885 07:14:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:04.885 07:14:48 -- host/digest.sh@80 -- # rw=randread 00:22:04.885 07:14:48 -- host/digest.sh@80 -- # bs=131072 00:22:04.885 07:14:48 -- host/digest.sh@80 -- # qd=16 00:22:04.885 07:14:48 -- host/digest.sh@82 -- # bperfpid=85943 00:22:04.885 07:14:48 -- host/digest.sh@83 -- # waitforlisten 85943 /var/tmp/bperf.sock 00:22:04.885 07:14:48 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:04.885 07:14:48 -- common/autotest_common.sh@819 -- # '[' -z 85943 ']' 00:22:04.885 07:14:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:04.885 07:14:48 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:04.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:04.885 07:14:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:04.885 07:14:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:04.885 07:14:48 -- common/autotest_common.sh@10 -- # set +x 00:22:05.143 [2024-07-11 07:14:48.960512] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:05.143 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:05.143 Zero copy mechanism will not be used. 00:22:05.143 [2024-07-11 07:14:48.960596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85943 ] 00:22:05.143 [2024-07-11 07:14:49.091080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.143 [2024-07-11 07:14:49.172925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.074 07:14:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.074 07:14:49 -- common/autotest_common.sh@852 -- # return 0 00:22:06.074 07:14:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:06.074 07:14:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:06.074 07:14:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:06.332 07:14:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:06.332 07:14:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:06.590 nvme0n1 00:22:06.590 07:14:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:06.591 07:14:50 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:06.591 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:06.591 Zero copy mechanism will not be used. 00:22:06.591 Running I/O for 2 seconds... 
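Each run_bperf invocation traced above follows the same RPC choreography: start bdevperf paused on its private UNIX socket, attach an NVMe/TCP controller with data digest enabled, then drive the workload through bdevperf.py. A minimal sketch of one iteration follows; paths and arguments are copied from the log, but the socket-polling loop is a simplified stand-in for the waitforlisten helper and is an assumption, not the real function.

#!/usr/bin/env bash
# Sketch of one run_bperf pass (initiator side), as seen in the trace.
set -e

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock
rw=randread bs=131072 qd=16        # the second clean-digest run above

# -z keeps bdevperf alive for RPC-driven tests; --wait-for-rpc defers
# framework init so accel settings could still be changed over RPC.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
bperfpid=$!

# Simplified waitforlisten: poll until the RPC socket answers.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null; do
    sleep 0.1
done

# Finish subsystem init, then attach the target with --ddgst so every
# read completion carries (and is checked against) a data digest.
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Kick off the 2-second workload configured on the command line.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$bperfpid"
wait "$bperfpid" || true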
00:22:08.494 00:22:08.494 Latency(us) 00:22:08.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.494 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:08.494 nvme0n1 : 2.00 8923.83 1115.48 0.00 0.00 1790.30 688.87 6762.12 00:22:08.494 =================================================================================================================== 00:22:08.494 Total : 8923.83 1115.48 0.00 0.00 1790.30 688.87 6762.12 00:22:08.494 0 00:22:08.494 07:14:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:08.494 07:14:52 -- host/digest.sh@92 -- # get_accel_stats 00:22:08.494 07:14:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:08.494 07:14:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:08.494 | select(.opcode=="crc32c") 00:22:08.494 | "\(.module_name) \(.executed)"' 00:22:08.494 07:14:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:08.753 07:14:52 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:08.753 07:14:52 -- host/digest.sh@93 -- # exp_module=software 00:22:08.753 07:14:52 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:08.753 07:14:52 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:08.753 07:14:52 -- host/digest.sh@97 -- # killprocess 85943 00:22:08.753 07:14:52 -- common/autotest_common.sh@926 -- # '[' -z 85943 ']' 00:22:08.753 07:14:52 -- common/autotest_common.sh@930 -- # kill -0 85943 00:22:08.753 07:14:52 -- common/autotest_common.sh@931 -- # uname 00:22:08.753 07:14:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:08.753 07:14:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85943 00:22:09.012 07:14:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:09.012 killing process with pid 85943 00:22:09.012 07:14:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:09.012 07:14:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85943' 00:22:09.012 Received shutdown signal, test time was about 2.000000 seconds 00:22:09.012 00:22:09.012 Latency(us) 00:22:09.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.012 =================================================================================================================== 00:22:09.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.012 07:14:52 -- common/autotest_common.sh@945 -- # kill 85943 00:22:09.012 07:14:52 -- common/autotest_common.sh@950 -- # wait 85943 00:22:09.012 07:14:53 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:09.012 07:14:53 -- host/digest.sh@77 -- # local rw bs qd 00:22:09.012 07:14:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:09.012 07:14:53 -- host/digest.sh@80 -- # rw=randwrite 00:22:09.012 07:14:53 -- host/digest.sh@80 -- # bs=4096 00:22:09.012 07:14:53 -- host/digest.sh@80 -- # qd=128 00:22:09.012 07:14:53 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:09.012 07:14:53 -- host/digest.sh@82 -- # bperfpid=86034 00:22:09.012 07:14:53 -- host/digest.sh@83 -- # waitforlisten 86034 /var/tmp/bperf.sock 00:22:09.012 07:14:53 -- common/autotest_common.sh@819 -- # '[' -z 86034 ']' 00:22:09.012 07:14:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:09.012 07:14:53 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:09.012 07:14:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:09.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:09.012 07:14:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.012 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:22:09.271 [2024-07-11 07:14:53.083213] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:09.271 [2024-07-11 07:14:53.083294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86034 ] 00:22:09.271 [2024-07-11 07:14:53.213800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.271 [2024-07-11 07:14:53.302095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.204 07:14:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.204 07:14:54 -- common/autotest_common.sh@852 -- # return 0 00:22:10.204 07:14:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:10.204 07:14:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:10.204 07:14:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:10.461 07:14:54 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:10.461 07:14:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:10.718 nvme0n1 00:22:10.718 07:14:54 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:10.718 07:14:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:10.718 Running I/O for 2 seconds... 
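After each workload, host/digest.sh asks the bdevperf app which accel module actually executed the crc32c operations and how many it ran, which is what the accel_get_stats / jq exchange above is doing. A short sketch of that check follows; the socket path and jq filter are copied from the trace, and the surrounding logic mirrors the [[ ... ]] / (( ... )) assertions visible in the xtrace.

#!/usr/bin/env bash
# Sketch of the post-run digest verification on the bdevperf RPC socket.
set -e

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

get_accel_stats() {
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats \
        | jq -rc '.operations[]
                  | select(.opcode=="crc32c")
                  | "\(.module_name) \(.executed)"'
}

read -r acc_module acc_executed < <(get_accel_stats)

exp_module=software   # no hardware accel module is configured in this run

# The digest path must actually have been exercised...
(( acc_executed > 0 ))
# ...and by the module we expect.
[[ $acc_module == "$exp_module" ]]
echo "crc32c executed $acc_executed times in module $acc_module"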
00:22:13.247 00:22:13.247 Latency(us) 00:22:13.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.247 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:13.247 nvme0n1 : 2.00 28978.02 113.20 0.00 0.00 4412.60 1839.48 14537.08 00:22:13.247 =================================================================================================================== 00:22:13.247 Total : 28978.02 113.20 0.00 0.00 4412.60 1839.48 14537.08 00:22:13.247 0 00:22:13.247 07:14:56 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:13.247 07:14:56 -- host/digest.sh@92 -- # get_accel_stats 00:22:13.247 07:14:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:13.247 07:14:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:13.247 07:14:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:13.247 | select(.opcode=="crc32c") 00:22:13.247 | "\(.module_name) \(.executed)"' 00:22:13.247 07:14:56 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:13.247 07:14:56 -- host/digest.sh@93 -- # exp_module=software 00:22:13.247 07:14:56 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:13.247 07:14:56 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:13.247 07:14:56 -- host/digest.sh@97 -- # killprocess 86034 00:22:13.247 07:14:56 -- common/autotest_common.sh@926 -- # '[' -z 86034 ']' 00:22:13.247 07:14:56 -- common/autotest_common.sh@930 -- # kill -0 86034 00:22:13.248 07:14:56 -- common/autotest_common.sh@931 -- # uname 00:22:13.248 07:14:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.248 07:14:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86034 00:22:13.248 07:14:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:13.248 07:14:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:13.248 killing process with pid 86034 00:22:13.248 07:14:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86034' 00:22:13.248 Received shutdown signal, test time was about 2.000000 seconds 00:22:13.248 00:22:13.248 Latency(us) 00:22:13.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.248 =================================================================================================================== 00:22:13.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.248 07:14:57 -- common/autotest_common.sh@945 -- # kill 86034 00:22:13.248 07:14:57 -- common/autotest_common.sh@950 -- # wait 86034 00:22:13.248 07:14:57 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:13.248 07:14:57 -- host/digest.sh@77 -- # local rw bs qd 00:22:13.248 07:14:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:13.248 07:14:57 -- host/digest.sh@80 -- # rw=randwrite 00:22:13.248 07:14:57 -- host/digest.sh@80 -- # bs=131072 00:22:13.248 07:14:57 -- host/digest.sh@80 -- # qd=16 00:22:13.248 07:14:57 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:13.248 07:14:57 -- host/digest.sh@82 -- # bperfpid=86124 00:22:13.248 07:14:57 -- host/digest.sh@83 -- # waitforlisten 86124 /var/tmp/bperf.sock 00:22:13.248 07:14:57 -- common/autotest_common.sh@819 -- # '[' -z 86124 ']' 00:22:13.248 07:14:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:13.248 07:14:57 -- common/autotest_common.sh@824 -- 
# local max_retries=100 00:22:13.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:13.248 07:14:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:13.248 07:14:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:13.248 07:14:57 -- common/autotest_common.sh@10 -- # set +x 00:22:13.248 [2024-07-11 07:14:57.272348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:13.248 [2024-07-11 07:14:57.272445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86124 ] 00:22:13.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:13.248 Zero copy mechanism will not be used. 00:22:13.507 [2024-07-11 07:14:57.395620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.507 [2024-07-11 07:14:57.471768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.441 07:14:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:14.441 07:14:58 -- common/autotest_common.sh@852 -- # return 0 00:22:14.441 07:14:58 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:14.441 07:14:58 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:14.441 07:14:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:14.699 07:14:58 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.699 07:14:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.958 nvme0n1 00:22:14.958 07:14:58 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:14.958 07:14:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:14.958 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:14.958 Zero copy mechanism will not be used. 00:22:14.958 Running I/O for 2 seconds... 
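The killprocess calls that close out every bperf run above follow a fixed pattern: confirm the pid is still alive, confirm it is not a sudo wrapper, then SIGTERM and reap it so the test does not leak bdevperf or nvmf_tgt processes. The sketch below is an approximation reconstructed from the xtrace, not the verbatim common/autotest_common.sh function; in particular, how the real helper handles a sudo-owned pid is not shown in this excerpt.

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1

    # Fails (and thereby fails the test) if the process already exited.
    kill -0 "$pid"

    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Never SIGTERM a sudo wrapper directly; the real helper takes a
        # different path in that case (not visible in this excerpt).
        [[ $process_name != sudo ]] || return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}

# Usage mirroring the trace: tear down the bdevperf instance for this run.
killprocess "$bperfpid"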
00:22:17.486 00:22:17.486 Latency(us) 00:22:17.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.486 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:17.486 nvme0n1 : 2.00 7880.85 985.11 0.00 0.00 2025.93 1712.87 10724.07 00:22:17.486 =================================================================================================================== 00:22:17.486 Total : 7880.85 985.11 0.00 0.00 2025.93 1712.87 10724.07 00:22:17.486 0 00:22:17.486 07:15:00 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:17.486 07:15:00 -- host/digest.sh@92 -- # get_accel_stats 00:22:17.486 07:15:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:17.486 07:15:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:17.486 07:15:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:17.486 | select(.opcode=="crc32c") 00:22:17.486 | "\(.module_name) \(.executed)"' 00:22:17.486 07:15:01 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:17.486 07:15:01 -- host/digest.sh@93 -- # exp_module=software 00:22:17.486 07:15:01 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:17.486 07:15:01 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:17.486 07:15:01 -- host/digest.sh@97 -- # killprocess 86124 00:22:17.486 07:15:01 -- common/autotest_common.sh@926 -- # '[' -z 86124 ']' 00:22:17.486 07:15:01 -- common/autotest_common.sh@930 -- # kill -0 86124 00:22:17.486 07:15:01 -- common/autotest_common.sh@931 -- # uname 00:22:17.486 07:15:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.486 07:15:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86124 00:22:17.486 killing process with pid 86124 00:22:17.486 Received shutdown signal, test time was about 2.000000 seconds 00:22:17.486 00:22:17.486 Latency(us) 00:22:17.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.486 =================================================================================================================== 00:22:17.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.486 07:15:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:17.486 07:15:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:17.486 07:15:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86124' 00:22:17.486 07:15:01 -- common/autotest_common.sh@945 -- # kill 86124 00:22:17.486 07:15:01 -- common/autotest_common.sh@950 -- # wait 86124 00:22:17.486 07:15:01 -- host/digest.sh@126 -- # killprocess 85807 00:22:17.486 07:15:01 -- common/autotest_common.sh@926 -- # '[' -z 85807 ']' 00:22:17.486 07:15:01 -- common/autotest_common.sh@930 -- # kill -0 85807 00:22:17.486 07:15:01 -- common/autotest_common.sh@931 -- # uname 00:22:17.486 07:15:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.486 07:15:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85807 00:22:17.486 07:15:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:17.486 07:15:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:17.486 killing process with pid 85807 00:22:17.486 07:15:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85807' 00:22:17.486 07:15:01 -- common/autotest_common.sh@945 -- # kill 85807 00:22:17.486 07:15:01 -- common/autotest_common.sh@950 -- # wait 85807 00:22:17.744 00:22:17.744 real 0m18.278s 00:22:17.744 
user 0m33.259s 00:22:17.744 sys 0m5.277s 00:22:17.744 07:15:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:17.744 07:15:01 -- common/autotest_common.sh@10 -- # set +x 00:22:17.744 ************************************ 00:22:17.744 END TEST nvmf_digest_clean 00:22:17.744 ************************************ 00:22:17.744 07:15:01 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:17.744 07:15:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:17.744 07:15:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:17.744 07:15:01 -- common/autotest_common.sh@10 -- # set +x 00:22:17.744 ************************************ 00:22:17.744 START TEST nvmf_digest_error 00:22:17.744 ************************************ 00:22:17.744 07:15:01 -- common/autotest_common.sh@1104 -- # run_digest_error 00:22:17.744 07:15:01 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:17.744 07:15:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:17.744 07:15:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:17.744 07:15:01 -- common/autotest_common.sh@10 -- # set +x 00:22:17.744 07:15:01 -- nvmf/common.sh@469 -- # nvmfpid=86236 00:22:17.744 07:15:01 -- nvmf/common.sh@470 -- # waitforlisten 86236 00:22:17.744 07:15:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:17.744 07:15:01 -- common/autotest_common.sh@819 -- # '[' -z 86236 ']' 00:22:17.744 07:15:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.744 07:15:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:17.744 07:15:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.744 07:15:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:17.744 07:15:01 -- common/autotest_common.sh@10 -- # set +x 00:22:18.005 [2024-07-11 07:15:01.854596] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:18.005 [2024-07-11 07:15:01.854695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.005 [2024-07-11 07:15:01.992664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.264 [2024-07-11 07:15:02.141061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:18.264 [2024-07-11 07:15:02.141224] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.264 [2024-07-11 07:15:02.141238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.264 [2024-07-11 07:15:02.141246] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
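The nvmf_digest_error suite begins, like the clean suite, with nvmfappstart --wait-for-rpc: nvmf_tgt is launched inside the test namespace, paused before framework init, and the script waits for its default RPC socket. A minimal sketch follows; the command line is taken from the trace, while the polling loop is a simplified stand-in for waitforlisten rather than the actual helper.

#!/usr/bin/env bash
# Sketch of nvmfappstart --wait-for-rpc as used by both digest suites.
set -e

SPDK=/home/vagrant/spdk_repo/spdk
NS=nvmf_tgt_ns_spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid"    # abort if the target died during startup
    sleep 0.1
done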
00:22:18.264 [2024-07-11 07:15:02.141284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.831 07:15:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:18.831 07:15:02 -- common/autotest_common.sh@852 -- # return 0 00:22:18.831 07:15:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:18.831 07:15:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:18.831 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:22:18.831 07:15:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.831 07:15:02 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:18.831 07:15:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.831 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:22:18.831 [2024-07-11 07:15:02.817780] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:18.831 07:15:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.831 07:15:02 -- host/digest.sh@104 -- # common_target_config 00:22:18.831 07:15:02 -- host/digest.sh@43 -- # rpc_cmd 00:22:18.831 07:15:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.831 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:22:19.089 null0 00:22:19.089 [2024-07-11 07:15:02.953910] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.089 [2024-07-11 07:15:02.978066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.089 07:15:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.089 07:15:02 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:19.089 07:15:02 -- host/digest.sh@54 -- # local rw bs qd 00:22:19.089 07:15:02 -- host/digest.sh@56 -- # rw=randread 00:22:19.089 07:15:02 -- host/digest.sh@56 -- # bs=4096 00:22:19.089 07:15:02 -- host/digest.sh@56 -- # qd=128 00:22:19.089 07:15:02 -- host/digest.sh@58 -- # bperfpid=86280 00:22:19.089 07:15:02 -- host/digest.sh@60 -- # waitforlisten 86280 /var/tmp/bperf.sock 00:22:19.089 07:15:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:19.089 07:15:02 -- common/autotest_common.sh@819 -- # '[' -z 86280 ']' 00:22:19.089 07:15:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:19.089 07:15:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:19.090 07:15:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:19.090 07:15:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.090 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:22:19.090 [2024-07-11 07:15:03.031738] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
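What distinguishes the error suite's target setup is the accel_assign_opc call above: crc32c is routed through the "error" accel module before framework init, so digests can later be corrupted on demand, and only then is the same null0 namespace and 10.0.0.2:4420 TCP listener configured. The concrete RPCs behind common_target_config are not visible in this excerpt; the calls below are standard SPDK rpc.py commands assumed to be roughly equivalent, shown for illustration only.

#!/usr/bin/env bash
# Sketch of the target-side configuration for the digest_error suite.
set -e

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"          # defaults to /var/tmp/spdk.sock

# Route crc32c through the error-injection module, then finish startup.
$RPC accel_assign_opc -o crc32c -m error
$RPC framework_start_init

# Null backing bdev plus an NVMe/TCP subsystem listening where the
# initiator-side bdevperf will connect (assumed equivalent of
# common_target_config).
$RPC bdev_null_create null0 100 4096
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420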
00:22:19.090 [2024-07-11 07:15:03.031863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86280 ] 00:22:19.348 [2024-07-11 07:15:03.162038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.348 [2024-07-11 07:15:03.273605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.915 07:15:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:19.915 07:15:03 -- common/autotest_common.sh@852 -- # return 0 00:22:19.915 07:15:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:19.915 07:15:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:20.480 07:15:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:20.480 07:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.480 07:15:04 -- common/autotest_common.sh@10 -- # set +x 00:22:20.480 07:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.480 07:15:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.480 07:15:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:20.738 nvme0n1 00:22:20.738 07:15:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:20.738 07:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.738 07:15:04 -- common/autotest_common.sh@10 -- # set +x 00:22:20.738 07:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.738 07:15:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:20.738 07:15:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:20.738 Running I/O for 2 seconds... 
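The run_bperf_err flow traced above then ties the two sides together: the initiator is told to count NVMe errors and retry forever, the controller is attached with --ddgst so data digests are validated on every completion, the target is told to corrupt the crc32c result for the next 256 operations, and the workload is started. The digest failures surface as the nvme_tcp "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" completions in the log that follows. A sketch of that sequence, with socket paths and arguments copied from the trace:

#!/usr/bin/env bash
# Sketch of the error-injection half of run_bperf_err.
set -e

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock      # bdevperf RPC socket (initiator side)

# Initiator: keep per-error statistics and retry indefinitely instead of
# failing the I/O up the stack.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Target: make sure no stale injection is active before connecting.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Connect with --ddgst so the initiator checks the data digest of every
# completion and can notice the corruption.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: corrupt the crc32c result for the next 256 operations.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive I/O; the injected digest failures appear as the nvme_tcp and
# nvme_qpair errors logged below.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests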
00:22:20.738 [2024-07-11 07:15:04.690353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.738 [2024-07-11 07:15:04.690420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.738 [2024-07-11 07:15:04.690436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.738 [2024-07-11 07:15:04.699078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.738 [2024-07-11 07:15:04.699113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.738 [2024-07-11 07:15:04.699142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.738 [2024-07-11 07:15:04.711611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.738 [2024-07-11 07:15:04.711646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.711674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.739 [2024-07-11 07:15:04.723868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.739 [2024-07-11 07:15:04.723903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.723931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.739 [2024-07-11 07:15:04.736648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.739 [2024-07-11 07:15:04.736683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.736711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.739 [2024-07-11 07:15:04.745100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.739 [2024-07-11 07:15:04.745134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.745161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.739 [2024-07-11 07:15:04.759042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.739 [2024-07-11 07:15:04.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.759105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.739 [2024-07-11 07:15:04.769076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.739 [2024-07-11 07:15:04.769111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.769138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.739 [2024-07-11 07:15:04.778776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.739 [2024-07-11 07:15:04.778809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.778821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.739 [2024-07-11 07:15:04.789093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.739 [2024-07-11 07:15:04.789127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.739 [2024-07-11 07:15:04.789154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.801780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.801814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.801841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.810975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.811008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.811036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.820327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.820362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.820389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.830642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.830694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.830706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.840751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.840800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.840828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.851080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.851114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.851141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.860461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.860493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.860520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.872984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.873019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.873046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.885009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.885042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.885069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.897125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.897159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.897186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.909298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.909332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 
[2024-07-11 07:15:04.909360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.921029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.921064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.921091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.929863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.998 [2024-07-11 07:15:04.929897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.998 [2024-07-11 07:15:04.929924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.998 [2024-07-11 07:15:04.939961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:04.939995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:04.940023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:04.952306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:04.952340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:04.952367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:04.962068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:04.962102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:04.962129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:04.971752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:04.971786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:04.971813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:04.982440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:04.982498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2694 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:04.982511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:04.991619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:04.991652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:04.991680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:05.000934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:05.000968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:05.000996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:05.009821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:05.009854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:05.009881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:05.018959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:05.018992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:05.019019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:05.030698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:05.030763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:05.030775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:05.039885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:05.039919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:05.039946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:20.999 [2024-07-11 07:15:05.049062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:20.999 [2024-07-11 07:15:05.049095] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.999 [2024-07-11 07:15:05.049123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.058261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.058337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.058350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.066136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.066170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.066197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.077618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.077651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.077679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.090016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.090052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.090080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.101218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.101254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.101281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.112919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.112953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.112980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.122668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 
07:15:05.122721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.122734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.132497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.132530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.132557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.142019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.142054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.142081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.151748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.151782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.151810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.162013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.162048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.162075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.171861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.171895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.171923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.183293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.183326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.183354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.191252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.191286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.191314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.203125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.203159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.203186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.215731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.215765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.215793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.226784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.226818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.226845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.237548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.237580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.237608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.246789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.246822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.246849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.257082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.257132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.257160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.267040] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.267091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.267119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.277697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.277752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.277765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.289315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.289350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.289378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.299917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.299951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.299979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.259 [2024-07-11 07:15:05.311427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.259 [2024-07-11 07:15:05.311469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.259 [2024-07-11 07:15:05.311497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.324661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.324732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.324747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.337247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.337296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.337324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.349205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.349256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.349285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.358452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.358515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.358528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.367676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.367726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.367754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.377453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.377513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.377542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.387175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.387227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.387255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.396963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.397013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.397042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.407418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.407478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.407507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.418822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.418873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.418901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.428721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.428774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.428786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.439747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.439798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.439826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.449573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.449624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.449653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.459245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.459296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.459324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.470549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.470614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.470658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.483399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.483474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.483487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.492672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.492740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.492753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.502662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.502727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.502755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.515631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.515665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.515693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.527975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.528008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.528036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.519 [2024-07-11 07:15:05.537456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.519 [2024-07-11 07:15:05.537503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.519 [2024-07-11 07:15:05.537515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.520 [2024-07-11 07:15:05.547693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.520 [2024-07-11 07:15:05.547726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.520 [2024-07-11 07:15:05.547754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.520 [2024-07-11 07:15:05.556282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.520 [2024-07-11 07:15:05.556316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.520 
[2024-07-11 07:15:05.556343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.520 [2024-07-11 07:15:05.565502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.520 [2024-07-11 07:15:05.565531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.520 [2024-07-11 07:15:05.565543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.520 [2024-07-11 07:15:05.575117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.520 [2024-07-11 07:15:05.575151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.520 [2024-07-11 07:15:05.575178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.586517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.586569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.586582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.599198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.599234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.599261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.611222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.611256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.611283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.619247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.619281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.619309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.631631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.631665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4282 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.631692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.643681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.643715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.643742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.655275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.655308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.655335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.667928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.667962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.667989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.679976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.680009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.680037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.688433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.688476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.688504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.699909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.699945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.699972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.712347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.712381] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.712409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.725061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.725096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.725122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.733910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.733943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.733971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.743266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.743300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.743327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.751827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.751861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.751888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.763063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.763097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.763124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.772453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.772485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.772513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.782064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.782096] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.779 [2024-07-11 07:15:05.782124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.779 [2024-07-11 07:15:05.791396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.779 [2024-07-11 07:15:05.791431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.780 [2024-07-11 07:15:05.791468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.780 [2024-07-11 07:15:05.802218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.780 [2024-07-11 07:15:05.802253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.780 [2024-07-11 07:15:05.802305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.780 [2024-07-11 07:15:05.814945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.780 [2024-07-11 07:15:05.814979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.780 [2024-07-11 07:15:05.815006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.780 [2024-07-11 07:15:05.827209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.780 [2024-07-11 07:15:05.827242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.780 [2024-07-11 07:15:05.827270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.780 [2024-07-11 07:15:05.837120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:21.780 [2024-07-11 07:15:05.837153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.780 [2024-07-11 07:15:05.837180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.847927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.847978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.847991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.859510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.859543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.859570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.871508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.871541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.871568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.882996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.883030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.883057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.892317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.892351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.892378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.901983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.902020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.902047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.910962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.910995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.911022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.920702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.920735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.920763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.933203] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.933238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.933265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.945821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.945856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.945883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.957637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.957688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.957700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.969426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.969486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.969498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.978046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.978079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.978106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:05.990022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:05.990055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:05.990083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.000751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.000786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.000814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:22.039 [2024-07-11 07:15:06.012571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.012605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.012633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.024319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.024354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.024381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.036869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.036904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.036932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.045415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.045474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.045488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.057659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.057693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.057720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.066997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.067030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.067058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.077247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.077280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.077308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.039 [2024-07-11 07:15:06.088139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.039 [2024-07-11 07:15:06.088173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.039 [2024-07-11 07:15:06.088200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.298 [2024-07-11 07:15:06.098062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.298 [2024-07-11 07:15:06.098130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.298 [2024-07-11 07:15:06.098159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.298 [2024-07-11 07:15:06.108124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.298 [2024-07-11 07:15:06.108157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.298 [2024-07-11 07:15:06.108185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.298 [2024-07-11 07:15:06.120218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.298 [2024-07-11 07:15:06.120253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.120281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.129178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.129213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.129240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.137401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.137435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.137473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.146793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.146826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.146853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.156194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.156229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.156257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.165789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.165839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.165867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.177049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.177084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.177111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.188425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.188469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.188496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.196858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.196891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.196919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.207018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.207051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.207077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.219591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.219643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:22.299 [2024-07-11 07:15:06.219655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.231927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.231960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.231987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.242438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.242504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.242517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.252084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.252119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.252146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.262545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.262597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.262610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.272246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.272296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.272324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.282140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.282191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.282218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.294482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.294548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15395 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.294561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.306569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.306623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.306636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.316502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.316548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.316560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.327985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.328047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.339902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.339935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.339962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.299 [2024-07-11 07:15:06.352792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.299 [2024-07-11 07:15:06.352874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.299 [2024-07-11 07:15:06.352903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.558 [2024-07-11 07:15:06.365039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.558 [2024-07-11 07:15:06.365073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.558 [2024-07-11 07:15:06.365101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.558 [2024-07-11 07:15:06.376493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.558 [2024-07-11 07:15:06.376541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.558 [2024-07-11 07:15:06.376552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.558 [2024-07-11 07:15:06.385830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.558 [2024-07-11 07:15:06.385864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.558 [2024-07-11 07:15:06.385891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.558 [2024-07-11 07:15:06.395846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.558 [2024-07-11 07:15:06.395881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.558 [2024-07-11 07:15:06.395908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.558 [2024-07-11 07:15:06.407557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.558 [2024-07-11 07:15:06.407590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.558 [2024-07-11 07:15:06.407617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.558 [2024-07-11 07:15:06.417390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.558 [2024-07-11 07:15:06.417423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.558 [2024-07-11 07:15:06.417451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.558 [2024-07-11 07:15:06.427577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.558 [2024-07-11 07:15:06.427611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.427640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.436055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.436089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.436116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.446093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 
00:22:22.559 [2024-07-11 07:15:06.446126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.446154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.455833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.455867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.455894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.467216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.467250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.467278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.479308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.479342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.479370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.490970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.491002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.491029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.501944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.501994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.502018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.515179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.515229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.515257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.524802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.524867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.524894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.534096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.534149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.534178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.543908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.543959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.543988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.553532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.553581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.553610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.562865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.562915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.562943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.572525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.572574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.572602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.582138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.582188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.582216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.592219] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.592269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.592297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.603584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.603634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.603662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.559 [2024-07-11 07:15:06.615569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.559 [2024-07-11 07:15:06.615632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.559 [2024-07-11 07:15:06.615662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.818 [2024-07-11 07:15:06.624811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.818 [2024-07-11 07:15:06.624863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.818 [2024-07-11 07:15:06.624876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.818 [2024-07-11 07:15:06.635107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.818 [2024-07-11 07:15:06.635160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.818 [2024-07-11 07:15:06.635188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.818 [2024-07-11 07:15:06.645643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.818 [2024-07-11 07:15:06.645679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.818 [2024-07-11 07:15:06.645708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.818 [2024-07-11 07:15:06.656770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.818 [2024-07-11 07:15:06.656820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.818 [2024-07-11 07:15:06.656848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:22.818 [2024-07-11 07:15:06.665377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x200c230) 00:22:22.818 [2024-07-11 07:15:06.665427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.818 [2024-07-11 07:15:06.665456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.818 00:22:22.818 Latency(us) 00:22:22.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.818 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:22.818 nvme0n1 : 2.00 23889.70 93.32 0.00 0.00 5352.74 2561.86 18826.71 00:22:22.818 =================================================================================================================== 00:22:22.818 Total : 23889.70 93.32 0.00 0.00 5352.74 2561.86 18826.71 00:22:22.818 0 00:22:22.818 07:15:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:22.818 07:15:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:22.818 07:15:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:22.818 | .driver_specific 00:22:22.818 | .nvme_error 00:22:22.818 | .status_code 00:22:22.818 | .command_transient_transport_error' 00:22:22.818 07:15:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:23.077 07:15:06 -- host/digest.sh@71 -- # (( 187 > 0 )) 00:22:23.077 07:15:06 -- host/digest.sh@73 -- # killprocess 86280 00:22:23.077 07:15:06 -- common/autotest_common.sh@926 -- # '[' -z 86280 ']' 00:22:23.077 07:15:06 -- common/autotest_common.sh@930 -- # kill -0 86280 00:22:23.077 07:15:06 -- common/autotest_common.sh@931 -- # uname 00:22:23.077 07:15:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.077 07:15:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86280 00:22:23.077 killing process with pid 86280 00:22:23.077 Received shutdown signal, test time was about 2.000000 seconds 00:22:23.077 00:22:23.077 Latency(us) 00:22:23.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.077 =================================================================================================================== 00:22:23.077 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.077 07:15:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:23.077 07:15:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:23.077 07:15:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86280' 00:22:23.077 07:15:06 -- common/autotest_common.sh@945 -- # kill 86280 00:22:23.077 07:15:06 -- common/autotest_common.sh@950 -- # wait 86280 00:22:23.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
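[Editor's note] The run above completes the first error-injection pass: with crc32c data-digest corruption injected, bdevperf still reports roughly 23889 IOPS over the 2-second randread run, each corrupted read surfaces as a COMMAND TRANSIENT TRANSPORT ERROR, and the check passes because the transient error counter read back from the bdev (187) is greater than zero. Below is a minimal sketch of that final check, assembled only from the RPC socket path, bdev name, and jq filter visible in the log above; it is a hedged illustration, not part of the original log or script.

  # Sketch: query bdevperf's iostat over its RPC socket and pull the transient transport error counter
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest test only passes if at least one transient transport error was recorded
  (( errcount > 0 ))

The second pass, run_bperf_err randread 131072 16, repeats the same sequence with 128 KiB reads at queue depth 16; its startup and error output follow.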
00:22:23.339 07:15:07 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:23.339 07:15:07 -- host/digest.sh@54 -- # local rw bs qd 00:22:23.339 07:15:07 -- host/digest.sh@56 -- # rw=randread 00:22:23.339 07:15:07 -- host/digest.sh@56 -- # bs=131072 00:22:23.339 07:15:07 -- host/digest.sh@56 -- # qd=16 00:22:23.339 07:15:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:23.339 07:15:07 -- host/digest.sh@58 -- # bperfpid=86366 00:22:23.340 07:15:07 -- host/digest.sh@60 -- # waitforlisten 86366 /var/tmp/bperf.sock 00:22:23.340 07:15:07 -- common/autotest_common.sh@819 -- # '[' -z 86366 ']' 00:22:23.340 07:15:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:23.340 07:15:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:23.340 07:15:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:23.340 07:15:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:23.340 07:15:07 -- common/autotest_common.sh@10 -- # set +x 00:22:23.340 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:23.340 Zero copy mechanism will not be used. 00:22:23.340 [2024-07-11 07:15:07.213980] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:23.340 [2024-07-11 07:15:07.214080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86366 ] 00:22:23.340 [2024-07-11 07:15:07.342008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.614 [2024-07-11 07:15:07.428013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.195 07:15:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.195 07:15:08 -- common/autotest_common.sh@852 -- # return 0 00:22:24.195 07:15:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:24.195 07:15:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:24.454 07:15:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:24.454 07:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.454 07:15:08 -- common/autotest_common.sh@10 -- # set +x 00:22:24.454 07:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.454 07:15:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:24.454 07:15:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:24.712 nvme0n1 00:22:24.712 07:15:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:24.712 07:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.712 07:15:08 -- common/autotest_common.sh@10 -- # set +x 00:22:24.712 07:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.712 07:15:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:24.712 07:15:08 -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:24.971 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:24.971 Zero copy mechanism will not be used. 00:22:24.971 Running I/O for 2 seconds... 00:22:24.971 [2024-07-11 07:15:08.800695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.971 [2024-07-11 07:15:08.800754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.971 [2024-07-11 07:15:08.800783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.971 [2024-07-11 07:15:08.804430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.971 [2024-07-11 07:15:08.804489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.971 [2024-07-11 07:15:08.804501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.971 [2024-07-11 07:15:08.808542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.971 [2024-07-11 07:15:08.808590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.971 [2024-07-11 07:15:08.808618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.812475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.812509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.812535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.815959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.815992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.816019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.820620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.820655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.820682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.824811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.824844] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.824871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.828914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.828946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.828973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.833036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.833069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.833096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.837166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.837199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.837226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.841210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.841260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.841287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.844838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.844873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.844899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.848795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.848828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.848854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.852468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 
00:22:24.972 [2024-07-11 07:15:08.852502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.852529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.856810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.856846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.856872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.860917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.860950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.860977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.863716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.863773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.863800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.867759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.867808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.867852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.871373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.871407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.871434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.875145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.875179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.875208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.879095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.879128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.879155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.882733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.882767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.882794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.886102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.886134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.886161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.889948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.889997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.890024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.894426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.894488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.894518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.898471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.898536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.898549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.902599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.902652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.902681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.907311] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.907360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.907388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.911504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.911566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.911594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.916010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.916059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.916087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.919971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.920021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.920049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.972 [2024-07-11 07:15:08.924110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.972 [2024-07-11 07:15:08.924159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.972 [2024-07-11 07:15:08.924187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.928333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.928382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.928410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.932380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.932431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.932468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:24.973 [2024-07-11 07:15:08.935900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.935954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.935966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.939421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.939481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.939510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.943588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.943638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.943649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.947173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.947222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.947250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.950844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.950893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.950920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.954350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.954385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.954414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.958256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.958344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.958358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.961951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.962000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.962028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.965932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.965981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.966008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.970111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.970159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.970186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.973765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.973814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.973842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.977795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.977845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.977857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.981952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.982001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.982029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.985673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.985723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.985751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.989456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.989504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.989531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.993505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.993555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.993582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:08.997512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:08.997588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:08.997601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.001306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.001357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:09.001370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.005426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.005484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:09.005512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.008065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.008112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:09.008140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.011711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.011759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:24.973 [2024-07-11 07:15:09.011786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.015228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.015276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:09.015303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.018984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.019034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:09.019062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.023049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.023097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:09.023125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:24.973 [2024-07-11 07:15:09.026731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:24.973 [2024-07-11 07:15:09.026779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.973 [2024-07-11 07:15:09.026806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.233 [2024-07-11 07:15:09.030806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.030856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.030884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.034403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.034497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.034526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.038437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.038502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.038516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.042550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.042589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.042603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.046134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.046185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.046197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.049554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.049601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.049629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.053362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.053412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.053439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.056977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.057026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.057053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.061079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.061127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.061155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.065049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.065098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.065125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.068704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.068754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.068782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.072933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.072982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.073010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.076260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.076309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.076336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.080136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.080185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.080212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.083727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.083776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.083804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.087227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.087275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.087303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.091032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.091081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.091108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.094920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.094970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.094997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.099044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.099096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.099110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.103094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.103143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.103171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.106659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.106724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.106752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.110917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.110967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.110994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.114386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.114423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.114435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.117315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 
00:22:25.234 [2024-07-11 07:15:09.117363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.117391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.121222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.121272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.121284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.125114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.125164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.125175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.129158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.129191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.129219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.132716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.132748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.132775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.136152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.234 [2024-07-11 07:15:09.136186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.234 [2024-07-11 07:15:09.136213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.234 [2024-07-11 07:15:09.139440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.139481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.139509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.143897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.143960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.143986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.147860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.147893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.147920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.151309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.151342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.151370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.154592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.154674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.154701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.158566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.158633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.158645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.162068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.162101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.162127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.166142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.166194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.166206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.169746] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.169798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.169810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.173186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.173237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.173249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.177187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.177221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.177249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.180866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.180899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.180926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.184165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.184198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.184226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.188213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.188246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.188273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.191749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.191783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.191810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
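Each repeated entry above follows the same pattern: nvme_tcp.c reports a data digest mismatch on the queue pair (tqpair 0x1daba30), and the affected READ is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the optional data digest (DDGST) is a CRC32C computed over the data portion of the PDU; when the receiver's recomputed value disagrees with the digest carried in the PDU, the command is failed with this retryable transport status, which is what the test output is exercising here. As a minimal illustration of the digest arithmetic only (a standalone sketch, not SPDK's implementation in nvme_tcp.c, which uses table-driven or accelerated CRC32C), the following Python snippet computes CRC32C over a buffer:

# Minimal bit-by-bit CRC32C (Castagnoli) reference, illustrative only.
# Assumes the standard reflected polynomial 0x82F63B78 with initial value
# and final XOR of 0xFFFFFFFF, i.e. the CRC used for the NVMe/TCP DDGST.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Well-known CRC32C check value: crc32c(b"123456789") == 0xE3069283
    assert crc32c(b"123456789") == 0xE3069283
    print(hex(crc32c(b"123456789")))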
00:22:25.235 [2024-07-11 07:15:09.195204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.195236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.195265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.199350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.199382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.199408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.203044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.203076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.203103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.207275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.207308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.207336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.210922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.210954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.210982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.214744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.214791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.214818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.218795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.218828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.218855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.222393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.222471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.222485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.226134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.226182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.226209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.229154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.229187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.229214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.233325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.233359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.233387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.236856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.236891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.236918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.240557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.240591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.240618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.243822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.235 [2024-07-11 07:15:09.243855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.235 [2024-07-11 07:15:09.243882] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.235 [2024-07-11 07:15:09.247718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.247752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.247779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.251667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.251701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.251728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.255652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.255686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.255714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.258834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.258867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.258894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.262796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.262830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.262857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.266460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.266523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.266535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.270214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.270246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.270273] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.273635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.273687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.273699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.277680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.277727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.277754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.282064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.282098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.282125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.285603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.285651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.285678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.236 [2024-07-11 07:15:09.290356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.236 [2024-07-11 07:15:09.290410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.236 [2024-07-11 07:15:09.290423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.294121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.294153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.294180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.298641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.298692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:25.496 [2024-07-11 07:15:09.298719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.302275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.302368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.302397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.306520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.306556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.306584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.309931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.309963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.309990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.313778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.313828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.313871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.317303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.317337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.317364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.320733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.320767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.320794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.324575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.496 [2024-07-11 07:15:09.324626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.496 [2024-07-11 07:15:09.324639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.496 [2024-07-11 07:15:09.328100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.328150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.328177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.331953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.332002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.332030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.336296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.336346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.336374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.340316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.340363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.340375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.344134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.344166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.344193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.348050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.348084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.348112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.351813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.351878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.351905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.356043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.356075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.356101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.360275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.360306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.360333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.364041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.364074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.364101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.367985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.368017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.368044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.371970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.372003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.372030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.375347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.375379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.375407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.378776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.378809] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.378820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.382445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.382488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.382500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.385918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.385951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.385979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.389470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.389503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.389530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.393263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.393296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.393323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.396221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.396255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.396283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.400334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.400367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.400394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.403791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 
00:22:25.497 [2024-07-11 07:15:09.403824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.403851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.407606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.407639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.407666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.411333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.411366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.411392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.414798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.414830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.414858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.418651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.418685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.418711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.422531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.422564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.422592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.425843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.425874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.425901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.429188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.429221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.429248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.432981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.497 [2024-07-11 07:15:09.433015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.497 [2024-07-11 07:15:09.433042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.497 [2024-07-11 07:15:09.436841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.436875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.436901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.440125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.440158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.440185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.444019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.444052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.444079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.447475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.447505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.447533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.450953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.450986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.451013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.454633] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.454665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.454693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.458607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.458640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.458667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.461900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.461931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.461959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.465870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.465902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.465929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.470084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.470117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.470146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.474263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.474335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.474348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.478024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.478057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.478084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
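Because the same digest-error/completion pair repeats for hundreds of READs with only the timestamp, cid and lba changing, the stream is easier to audit as counts. A small hypothetical helper along the following lines, fed this console output on stdin, tallies completions per status and per cid; it relies only on the "(sct/sc) qid:.. cid:.." tokens that appear verbatim in the entries above:

import re
import sys
from collections import Counter

# Hypothetical summarizer for the repeated spdk_nvme_print_completion lines.
COMPLETION = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def summarize(stream):
    by_status, by_cid = Counter(), Counter()
    for line in stream:
        for m in COMPLETION.finditer(line):
            by_status[m["sct"] + "/" + m["sc"]] += 1
            by_cid[m["cid"]] += 1
    return by_status, by_cid

if __name__ == "__main__":
    statuses, cids = summarize(sys.stdin)
    print("completions by status:", dict(statuses))
    print("busiest cids:", cids.most_common(5))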
00:22:25.498 [2024-07-11 07:15:09.481619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.481650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.481677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.485224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.485256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.485283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.489134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.489166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.489193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.492759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.492792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.492818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.496779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.496828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.496855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.500436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.500478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.500505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.503222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.503255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.503282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.507023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.507055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.507083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.511271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.511304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.511332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.514055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.514088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.514115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.517603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.517636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.517663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.520939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.520972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.520999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.524185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.524218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.524245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.527970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.528003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.528030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.531715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.531748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.531775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.535238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.535270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.535297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.539302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.539333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.498 [2024-07-11 07:15:09.539361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.498 [2024-07-11 07:15:09.543094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.498 [2024-07-11 07:15:09.543142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.499 [2024-07-11 07:15:09.543170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.499 [2024-07-11 07:15:09.546692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.499 [2024-07-11 07:15:09.546724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.499 [2024-07-11 07:15:09.546752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.499 [2024-07-11 07:15:09.551021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.499 [2024-07-11 07:15:09.551056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.499 [2024-07-11 07:15:09.551083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.759 [2024-07-11 07:15:09.555418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.759 [2024-07-11 07:15:09.555491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.759 [2024-07-11 07:15:09.555520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.759 [2024-07-11 07:15:09.559048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.759 [2024-07-11 07:15:09.559080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.759 [2024-07-11 07:15:09.559107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.563367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.563401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.563429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.566734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.566767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.566794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.570529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.570564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.570593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.574818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.574850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.574877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.577899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.577930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.577957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.581103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.581136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 
[2024-07-11 07:15:09.581164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.584644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.584676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.584704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.588390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.588423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.588450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.591891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.591924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.591951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.596383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.596415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.596443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.599368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.599418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.599430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.603841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.603875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.603902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.607582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.607615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.607642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.611269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.611302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.611330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.615055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.615087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.615114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.618333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.618369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.618382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.621809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.621859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.621871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.625473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.625523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.625535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.629274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.629306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.629333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.633270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.633304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.633332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.637246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.637279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.637306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.641004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.641038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.641065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.644639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.644672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.644699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.648211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.648244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.648272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.652042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.652073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.652100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.655883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.655915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.655941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.659326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.659359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.659386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.663171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.663204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.663230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.760 [2024-07-11 07:15:09.666705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.760 [2024-07-11 07:15:09.666737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.760 [2024-07-11 07:15:09.666765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.670347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.670382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.670409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.674206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.674257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.674269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.677828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.677880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.677892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.681341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.681375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.681402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.685240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 
[2024-07-11 07:15:09.685273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.685300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.689071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.689105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.689132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.692044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.692077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.692104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.695797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.695831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.695858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.699415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.699459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.699488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.703506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.703539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.703567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.706912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.706946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.706973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.710939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.710972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.711000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.714653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.714735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.714747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.718029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.718062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.718089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.721959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.721993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.722021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.725766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.725799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.725826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.728615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.728648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.728675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.731903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.731937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.731964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.735531] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.735565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.735592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.739420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.739463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.739491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.742397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.742433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.742459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.746243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.746276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.746327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.750275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.750353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.750381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.753849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.753883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.753911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.757436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.757481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.757509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:25.761 [2024-07-11 07:15:09.761257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.761290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.761317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.764987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.765019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.765046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.768014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.768048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.768075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.772080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.761 [2024-07-11 07:15:09.772114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.761 [2024-07-11 07:15:09.772141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.761 [2024-07-11 07:15:09.775375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.775409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.775435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.779335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.779369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.779396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.782983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.783016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.783044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.786720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.786754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.786781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.790519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.790558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.790570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.794682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.794713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.794740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.798851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.798882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.798909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.802682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.802749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.802760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.806048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.806080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.806107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.809848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.809881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.809907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.762 [2024-07-11 07:15:09.813526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:25.762 [2024-07-11 07:15:09.813587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.762 [2024-07-11 07:15:09.813614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.022 [2024-07-11 07:15:09.817070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.022 [2024-07-11 07:15:09.817120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.022 [2024-07-11 07:15:09.817148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.022 [2024-07-11 07:15:09.820382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.022 [2024-07-11 07:15:09.820415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.022 [2024-07-11 07:15:09.820443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.022 [2024-07-11 07:15:09.823937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.022 [2024-07-11 07:15:09.823987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.824015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.827707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.827740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.827767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.831614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.831664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.831691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.835191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.835225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.835252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.838730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.838791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.838818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.842884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.842919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.842946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.846726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.846760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.846788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.850032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.850082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.850094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.853241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.853274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.853300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.857474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.857505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.857531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.860706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.860741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 
[2024-07-11 07:15:09.860768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.864381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.864415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.864442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.867482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.867514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.867542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.871337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.871370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.871398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.874968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.875002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.875030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.878926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.878958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.878985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.882555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.882590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.882638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.886018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.886050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.886076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.890159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.890192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.890219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.894229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.894263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.894314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.898393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.898427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.898454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.902087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.902138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.902150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.905176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.905227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.905239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.909069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.909119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.909131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.912542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.912575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.912602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.916931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.916965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.916992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.920531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.920562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.920590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.923769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.923803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.923829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.927437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.927481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.023 [2024-07-11 07:15:09.927508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.023 [2024-07-11 07:15:09.931169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.023 [2024-07-11 07:15:09.931202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.931229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.935187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.935221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.935248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.938117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.938150] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.938177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.942028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.942079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.942107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.945328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.945378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.945406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.949016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.949048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.949075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.953208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.953240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.953267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.957283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.957316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.957343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.960460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.960493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.960521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.963953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.963986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.964012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.967786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.967820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.967847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.971402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.971436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.971475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.974801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.974833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.974860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.978220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.978254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.978304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.982356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.982390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.982417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.986451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.986491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.986502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.990648] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.990680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.990709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.994776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.994807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.994833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:09.998898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:09.998929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:09.998955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.003184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.003228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.003242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.007188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.007220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.007248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.011295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.011328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.011355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.014576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.014628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.014655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:26.024 [2024-07-11 07:15:10.018837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.018870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.018897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.023243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.023326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.023339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.028009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.028059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.028086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.031572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.031621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.031648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.035557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.035589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.035617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.039539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.039571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.024 [2024-07-11 07:15:10.039597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.024 [2024-07-11 07:15:10.043110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.024 [2024-07-11 07:15:10.043143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.043170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.047530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.047563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.047590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.051545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.051576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.051604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.055228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.055259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.055287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.059228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.059259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.059285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.063182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.063214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.063241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.067409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.067440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.067478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.071296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.071331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.071358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.074978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.075011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.075038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.025 [2024-07-11 07:15:10.079221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.025 [2024-07-11 07:15:10.079255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.025 [2024-07-11 07:15:10.079283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.083521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.083553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.083580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.087628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.087677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.087704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.091246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.091298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.091326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.095035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.095084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.095112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.099119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.099169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:26.285 [2024-07-11 07:15:10.099196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.103319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.103370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.103398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.107254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.107288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.107315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.110933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.110966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.110994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.114818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.114852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.114879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.118196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.118230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.118257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.122376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.122416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.122430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.126243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.126337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.126352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.130086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.130140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.130153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.134401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.134440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.134465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.138644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.138682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.138695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.142979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.143029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.143057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.147175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.147224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.147251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.285 [2024-07-11 07:15:10.151057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.285 [2024-07-11 07:15:10.151106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.285 [2024-07-11 07:15:10.151134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.155349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.155396] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.155423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.159104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.159152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.159181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.162897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.162945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.162973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.167001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.167052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.167066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.170855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.170907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.170935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.174374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.174411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.174440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.178135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.178186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.178197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.181810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.181863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.181875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.185519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.185571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.185583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.189605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.189656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.189669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.192801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.192851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.192878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.196870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.196918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.196946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.200488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.200536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.200564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.204515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.204564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.204591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.208397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 
00:22:26.286 [2024-07-11 07:15:10.208469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.208483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.212035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.212084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.212111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.215912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.215962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.215986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.219881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.219930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.219957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.224074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.224122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.224150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.227871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.227919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.227947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.231707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.231757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.231784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.234301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.234365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.234392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.238382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.238431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.238469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.242086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.242134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.242162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.245595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.245639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.245665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.249550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.249600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.249612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.253138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.253186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.253213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.257314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.257362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.286 [2024-07-11 07:15:10.257390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.286 [2024-07-11 07:15:10.260986] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.286 [2024-07-11 07:15:10.261034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.261062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.265276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.265325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.265352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.268623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.268671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.268698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.272545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.272594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.272621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.276149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.276198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.276225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.280826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.280875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.280902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.284795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.284845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.284873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:26.287 [2024-07-11 07:15:10.288774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.288823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.288850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.292487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.292535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.292563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.295940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.295990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.296018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.300036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.300088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.300100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.303846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.303895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.303923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.307486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.307544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.307571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.311026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.311097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.311110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.314695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.314763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.314774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.318529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.318581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.318624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.322196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.322246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.322274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.325844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.325893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.325906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.329704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.329756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.329768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.333288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.333321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.333349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.336940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.336972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.337000] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.287 [2024-07-11 07:15:10.341378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.287 [2024-07-11 07:15:10.341430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.287 [2024-07-11 07:15:10.341500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.547 [2024-07-11 07:15:10.345751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.547 [2024-07-11 07:15:10.345818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.547 [2024-07-11 07:15:10.345845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.547 [2024-07-11 07:15:10.349781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.547 [2024-07-11 07:15:10.349876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.547 [2024-07-11 07:15:10.349888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.547 [2024-07-11 07:15:10.354024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.547 [2024-07-11 07:15:10.354056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.547 [2024-07-11 07:15:10.354083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.547 [2024-07-11 07:15:10.358389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.547 [2024-07-11 07:15:10.358427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.358440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.363015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.363049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.363076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.366953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.366986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.367013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.371349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.371383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.371410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.374659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.374715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.374743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.378334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.378388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.378400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.382045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.382079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.382106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.385832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.385867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.385889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.389186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.389219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.389246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.393073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.393106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.393133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.396715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.396748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.396774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.400099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.400132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.400159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.404320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.404353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.404380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.407914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.407947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.407973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.411538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.411569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.411597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.415559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.415607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.415635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.419474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.419506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.419534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.422534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.422586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.422629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.426392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.426457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.426471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.429500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.429550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.429561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.432974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.433025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.433037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.437093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.437126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.437153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.441048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.441081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.441108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.444887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.444920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.444947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.449179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.449211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.449239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.452769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.452801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.452828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.456647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.456680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.456707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.460128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.460161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.460189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.464542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.548 [2024-07-11 07:15:10.464575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.548 [2024-07-11 07:15:10.464601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.548 [2024-07-11 07:15:10.467887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.467921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.467948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.471552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 
[2024-07-11 07:15:10.471585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.471612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.475218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.475252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.475278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.479242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.479275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.479302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.483100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.483132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.483158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.486851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.486883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.486910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.490730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.490761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.490788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.494437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.494486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.494499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.498246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.498285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.498313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.501728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.501776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.501804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.505364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.505414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.505426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.508184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.508217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.508244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.512025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.512058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.512085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.515536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.515568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.515595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.518820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.518853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.518879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.522765] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.522828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.522855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.526360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.526394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.526422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.529825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.529873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.529900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.533349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.533381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.533408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.537202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.537234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.537260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.541209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.541242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.541269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.545112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.545144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.545171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:26.549 [2024-07-11 07:15:10.548601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.548634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.548661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.551922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.551955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.551982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.555419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.555464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.555492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.559644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.559693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.563495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.563527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.563554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.567016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.567047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.567073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.570473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.549 [2024-07-11 07:15:10.570522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.549 [2024-07-11 07:15:10.570549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.549 [2024-07-11 07:15:10.574688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.574754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.574782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.578322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.578378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.578406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.581288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.581321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.581349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.585259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.585293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.585305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.588566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.588600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.588611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.592574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.592608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.592620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.596213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.596247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.596259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.599743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.599776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.599788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.550 [2024-07-11 07:15:10.604294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.550 [2024-07-11 07:15:10.604328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.550 [2024-07-11 07:15:10.604341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.608069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.608102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.608115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.611417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.611477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.611491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.615594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.615629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.615641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.618878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.618912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.618924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.621987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.622022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 
[2024-07-11 07:15:10.622034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.626095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.626129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.626140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.629336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.629370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.629382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.633339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.633373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.633385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.636809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.636843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.636854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.640313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.640346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.640358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.644201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.644234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.644245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.647753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.647786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.647797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.652091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.652126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.652138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.655677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.655711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.655723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.659195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.659229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.659240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.663577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.663610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.663623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.667617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.667650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.667662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.671355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.671391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.671402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.674817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.674851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.674863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.678815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.678850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.678863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.682128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.682162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.682173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.686030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.686066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.686078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.689573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.689608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.689636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.693347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.693382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.810 [2024-07-11 07:15:10.693394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.810 [2024-07-11 07:15:10.696532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.810 [2024-07-11 07:15:10.696566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.696578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.700490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.700522] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.700534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.703831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.703865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.703877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.707654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.707688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.707716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.711378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.711414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.711427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.714998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.715033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.715044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.718250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.718292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.718321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.721595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.721629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.721656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.725888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 
00:22:26.811 [2024-07-11 07:15:10.725922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.725934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.729117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.729151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.729163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.732506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.732539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.732550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.736191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.736225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.736237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.739937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.739971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.739983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.743651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.743687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.743715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.747715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.747749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.747777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.751775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.751808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.751836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.755912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.755945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.755956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.760084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.760117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.760129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.763189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.763221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.763233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.767200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.767234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.767246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.771077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.771109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.771120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.774591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.774655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.774682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.779161] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.779195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.779207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.783041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.783074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.783086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.786267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.786341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.786370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.790015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.790166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.790182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.811 [2024-07-11 07:15:10.794268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1daba30) 00:22:26.811 [2024-07-11 07:15:10.794517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.811 [2024-07-11 07:15:10.794711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.811 00:22:26.811 Latency(us) 00:22:26.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.811 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:26.811 nvme0n1 : 2.00 8203.57 1025.45 0.00 0.00 1947.39 543.65 5064.15 00:22:26.811 =================================================================================================================== 00:22:26.811 Total : 8203.57 1025.45 0.00 0.00 1947.39 543.65 5064.15 00:22:26.811 0 00:22:26.811 07:15:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:26.811 07:15:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:26.812 07:15:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:26.812 07:15:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:26.812 | .driver_specific 00:22:26.812 | .nvme_error 00:22:26.812 | .status_code 00:22:26.812 | .command_transient_transport_error' 00:22:27.070 
07:15:11 -- host/digest.sh@71 -- # (( 529 > 0 )) 00:22:27.070 07:15:11 -- host/digest.sh@73 -- # killprocess 86366 00:22:27.070 07:15:11 -- common/autotest_common.sh@926 -- # '[' -z 86366 ']' 00:22:27.070 07:15:11 -- common/autotest_common.sh@930 -- # kill -0 86366 00:22:27.070 07:15:11 -- common/autotest_common.sh@931 -- # uname 00:22:27.070 07:15:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.070 07:15:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86366 00:22:27.070 killing process with pid 86366 00:22:27.070 Received shutdown signal, test time was about 2.000000 seconds 00:22:27.070 00:22:27.070 Latency(us) 00:22:27.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.070 =================================================================================================================== 00:22:27.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.070 07:15:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:27.070 07:15:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:27.070 07:15:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86366' 00:22:27.070 07:15:11 -- common/autotest_common.sh@945 -- # kill 86366 00:22:27.070 07:15:11 -- common/autotest_common.sh@950 -- # wait 86366 00:22:27.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:27.329 07:15:11 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:22:27.329 07:15:11 -- host/digest.sh@54 -- # local rw bs qd 00:22:27.329 07:15:11 -- host/digest.sh@56 -- # rw=randwrite 00:22:27.329 07:15:11 -- host/digest.sh@56 -- # bs=4096 00:22:27.329 07:15:11 -- host/digest.sh@56 -- # qd=128 00:22:27.329 07:15:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:27.329 07:15:11 -- host/digest.sh@58 -- # bperfpid=86451 00:22:27.329 07:15:11 -- host/digest.sh@60 -- # waitforlisten 86451 /var/tmp/bperf.sock 00:22:27.329 07:15:11 -- common/autotest_common.sh@819 -- # '[' -z 86451 ']' 00:22:27.329 07:15:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:27.329 07:15:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:27.329 07:15:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:27.329 07:15:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:27.329 07:15:11 -- common/autotest_common.sh@10 -- # set +x 00:22:27.329 [2024-07-11 07:15:11.356440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:27.329 [2024-07-11 07:15:11.356719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86451 ] 00:22:27.587 [2024-07-11 07:15:11.490461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.587 [2024-07-11 07:15:11.581707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.521 07:15:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:28.521 07:15:12 -- common/autotest_common.sh@852 -- # return 0 00:22:28.521 07:15:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:28.521 07:15:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:28.521 07:15:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:28.521 07:15:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.521 07:15:12 -- common/autotest_common.sh@10 -- # set +x 00:22:28.521 07:15:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.521 07:15:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:28.521 07:15:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:28.779 nvme0n1 00:22:29.038 07:15:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:29.038 07:15:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.038 07:15:12 -- common/autotest_common.sh@10 -- # set +x 00:22:29.038 07:15:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.038 07:15:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:29.038 07:15:12 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:29.038 Running I/O for 2 seconds... 
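Condensed from the xtrace above, the randwrite data-digest run amounts to roughly the following sequence. This is a sketch reconstructed from the trace, not the host/digest.sh source itself: the commands, paths, sockets and flags are copied from the log, while the comments are interpretation added for readability.

  # Start bdevperf on core mask 0x2 with its own RPC socket, waiting for configuration (-z).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Enable per-controller error counters and unlimited bdev-level retries.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any crc32c error injection first (invoked via rpc_cmd in the trace; the target socket is not shown),
  # then attach the TCP target with data digest (--ddgst) enabled. The controller comes up as nvme0n1.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Turn on crc32c corruption so data digest verification fails, then run the 2-second workload.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # Afterwards the error count is read back from iostat, as in the earlier randread check above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'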
00:22:29.038 [2024-07-11 07:15:12.993860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6890 00:22:29.038 [2024-07-11 07:15:12.994216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:12.994259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.003623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ee5c8 00:22:29.038 [2024-07-11 07:15:13.004073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.004106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.012528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6020 00:22:29.038 [2024-07-11 07:15:13.013383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.013432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.021296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ee5c8 00:22:29.038 [2024-07-11 07:15:13.021896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.021930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.030074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f57b0 00:22:29.038 [2024-07-11 07:15:13.030679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.030730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.038914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f0350 00:22:29.038 [2024-07-11 07:15:13.039448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.039496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.047685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fac10 00:22:29.038 [2024-07-11 07:15:13.048192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.048224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f 
p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.056151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fd208 00:22:29.038 [2024-07-11 07:15:13.057348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.057379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.065140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec840 00:22:29.038 [2024-07-11 07:15:13.065446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.065479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.074056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6738 00:22:29.038 [2024-07-11 07:15:13.074593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.074627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.083663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fda78 00:22:29.038 [2024-07-11 07:15:13.085070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.085118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:29.038 [2024-07-11 07:15:13.092738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ddc00 00:22:29.038 [2024-07-11 07:15:13.093191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.038 [2024-07-11 07:15:13.093224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.297 [2024-07-11 07:15:13.101471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eee38 00:22:29.297 [2024-07-11 07:15:13.102331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.297 [2024-07-11 07:15:13.102380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:29.297 [2024-07-11 07:15:13.110410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f46d0 00:22:29.298 [2024-07-11 07:15:13.110931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.110964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 
cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.119252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e9168 00:22:29.298 [2024-07-11 07:15:13.119789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.119819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.128110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e0630 00:22:29.298 [2024-07-11 07:15:13.128721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.128755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.136926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e3498 00:22:29.298 [2024-07-11 07:15:13.137435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.137477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.145665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190edd58 00:22:29.298 [2024-07-11 07:15:13.146237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.146271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.153661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f20d8 00:22:29.298 [2024-07-11 07:15:13.153818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.153837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.164709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ef270 00:22:29.298 [2024-07-11 07:15:13.165249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.165291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.173172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190df988 00:22:29.298 [2024-07-11 07:15:13.174352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.174400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.183089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e7c50 00:22:29.298 [2024-07-11 07:15:13.183661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.183694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.192173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ebb98 00:22:29.298 [2024-07-11 07:15:13.193056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.193104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.201223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f2948 00:22:29.298 [2024-07-11 07:15:13.202272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.202359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.210297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f4b08 00:22:29.298 [2024-07-11 07:15:13.211829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.211860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.219110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f9b30 00:22:29.298 [2024-07-11 07:15:13.220056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.220086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.228126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de038 00:22:29.298 [2024-07-11 07:15:13.229485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.229542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.237266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6b70 00:22:29.298 [2024-07-11 07:15:13.237836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.237871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.244949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eaab8 00:22:29.298 [2024-07-11 07:15:13.246021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.246052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.254000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e73e0 00:22:29.298 [2024-07-11 07:15:13.255118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.255148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.264023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fa7d8 00:22:29.298 [2024-07-11 07:15:13.264744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.264793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.272099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eb760 00:22:29.298 [2024-07-11 07:15:13.273611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.273641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.280934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ee5c8 00:22:29.298 [2024-07-11 07:15:13.282549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.282596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.288958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ee190 00:22:29.298 [2024-07-11 07:15:13.289753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.289800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.299726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f46d0 00:22:29.298 [2024-07-11 07:15:13.300335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.300412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.307319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fef90 00:22:29.298 [2024-07-11 07:15:13.308457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.308529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.316206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e2c28 00:22:29.298 [2024-07-11 07:15:13.317279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.317311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.324981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fb048 00:22:29.298 [2024-07-11 07:15:13.325983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.326015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.334527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ed0b0 00:22:29.298 [2024-07-11 07:15:13.334955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.334987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.343408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ef270 00:22:29.298 [2024-07-11 07:15:13.344543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.344590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.298 [2024-07-11 07:15:13.352367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f8618 00:22:29.298 [2024-07-11 07:15:13.352591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.298 [2024-07-11 07:15:13.352637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.361459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5658 00:22:29.558 [2024-07-11 07:15:13.362785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 
07:15:13.362816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.370606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e88f8 00:22:29.558 [2024-07-11 07:15:13.370833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.370852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.379481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e7c50 00:22:29.558 [2024-07-11 07:15:13.379898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.379930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.388441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e88f8 00:22:29.558 [2024-07-11 07:15:13.389789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.389855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.397104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190edd58 00:22:29.558 [2024-07-11 07:15:13.398066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.398097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.406221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190dfdc0 00:22:29.558 [2024-07-11 07:15:13.406660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.406691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.414581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eaab8 00:22:29.558 [2024-07-11 07:15:13.414756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.414847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.426695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f7970 00:22:29.558 [2024-07-11 07:15:13.427381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:29.558 [2024-07-11 07:15:13.427430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.435365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e2c28 00:22:29.558 [2024-07-11 07:15:13.436912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.436943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.444741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fc560 00:22:29.558 [2024-07-11 07:15:13.445196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.445224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.453302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ef270 00:22:29.558 [2024-07-11 07:15:13.454180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.454230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.464139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e3060 00:22:29.558 [2024-07-11 07:15:13.465087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.465116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.470819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fb048 00:22:29.558 [2024-07-11 07:15:13.471036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.471097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.480485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fcdd0 00:22:29.558 [2024-07-11 07:15:13.481304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.481334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.489104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f4b08 00:22:29.558 [2024-07-11 07:15:13.490434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10604 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.490506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.498085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e1f80 00:22:29.558 [2024-07-11 07:15:13.499393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.499425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.508967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6b70 00:22:29.558 [2024-07-11 07:15:13.509801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.509829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.515659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fb8b8 00:22:29.558 [2024-07-11 07:15:13.515763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.515781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.525486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f3e60 00:22:29.558 [2024-07-11 07:15:13.525943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.525973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.558 [2024-07-11 07:15:13.535551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ebfd0 00:22:29.558 [2024-07-11 07:15:13.536452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.558 [2024-07-11 07:15:13.536490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.543693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190df550 00:22:29.559 [2024-07-11 07:15:13.544264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.544298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.551597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eee38 00:22:29.559 [2024-07-11 07:15:13.551837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:15718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.551872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.561281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6300 00:22:29.559 [2024-07-11 07:15:13.561668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.561699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.570162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e3498 00:22:29.559 [2024-07-11 07:15:13.570736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.570770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.578943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190dece0 00:22:29.559 [2024-07-11 07:15:13.579442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.579485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.587715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e0630 00:22:29.559 [2024-07-11 07:15:13.588197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.588228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.596559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190df988 00:22:29.559 [2024-07-11 07:15:13.597017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.597049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.605348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fa3a0 00:22:29.559 [2024-07-11 07:15:13.605792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.605823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.559 [2024-07-11 07:15:13.614142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e7818 00:22:29.559 [2024-07-11 07:15:13.615061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:7215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.559 [2024-07-11 07:15:13.615126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:29.817 [2024-07-11 07:15:13.622966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de038 00:22:29.817 [2024-07-11 07:15:13.624215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.817 [2024-07-11 07:15:13.624246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:29.817 [2024-07-11 07:15:13.631791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de470 00:22:29.817 [2024-07-11 07:15:13.632879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.817 [2024-07-11 07:15:13.632909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:29.817 [2024-07-11 07:15:13.641943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190dece0 00:22:29.817 [2024-07-11 07:15:13.642755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.817 [2024-07-11 07:15:13.642832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:29.817 [2024-07-11 07:15:13.649745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ea680 00:22:29.817 [2024-07-11 07:15:13.651201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.817 [2024-07-11 07:15:13.651248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:29.817 [2024-07-11 07:15:13.658454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fac10 00:22:29.817 [2024-07-11 07:15:13.659697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.817 [2024-07-11 07:15:13.659727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:29.817 [2024-07-11 07:15:13.667692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f1ca0 00:22:29.817 [2024-07-11 07:15:13.668094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.817 [2024-07-11 07:15:13.668122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:29.817 [2024-07-11 07:15:13.676629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f3e60 00:22:29.817 [2024-07-11 07:15:13.677181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.677215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.685396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e8d30 00:22:29.818 [2024-07-11 07:15:13.685934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.685966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.694165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f4b08 00:22:29.818 [2024-07-11 07:15:13.694721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.694754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.702997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f81e0 00:22:29.818 [2024-07-11 07:15:13.703480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.703520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.711794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f8618 00:22:29.818 [2024-07-11 07:15:13.712256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.712287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.720200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec840 00:22:29.818 [2024-07-11 07:15:13.721559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.721605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.728784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fb480 00:22:29.818 [2024-07-11 07:15:13.729702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.729731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.737932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6738 00:22:29.818 [2024-07-11 
07:15:13.738196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.738230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.746916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f1868 00:22:29.818 [2024-07-11 07:15:13.747360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.747392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.755791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eee38 00:22:29.818 [2024-07-11 07:15:13.756193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.756222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.764696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ea680 00:22:29.818 [2024-07-11 07:15:13.765093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.765123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.773755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de038 00:22:29.818 [2024-07-11 07:15:13.774621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.774702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.782784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de470 00:22:29.818 [2024-07-11 07:15:13.783478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.783540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.791632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e27f0 00:22:29.818 [2024-07-11 07:15:13.792494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.792553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.800476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with 
pdu=0x2000190de8a8 00:22:29.818 [2024-07-11 07:15:13.801284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.801332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.809247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fda78 00:22:29.818 [2024-07-11 07:15:13.810392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.810469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.819415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f3a28 00:22:29.818 [2024-07-11 07:15:13.820034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.820111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.826590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f0ff8 00:22:29.818 [2024-07-11 07:15:13.826727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.826748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.837844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5a90 00:22:29.818 [2024-07-11 07:15:13.838388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.838422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.846678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eaab8 00:22:29.818 [2024-07-11 07:15:13.847180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.847212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.855654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eaab8 00:22:29.818 [2024-07-11 07:15:13.856136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.856167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.864490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1201bd0) with pdu=0x2000190e0ea0 00:22:29.818 [2024-07-11 07:15:13.865021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.865050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:29.818 [2024-07-11 07:15:13.873288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e7c50 00:22:29.818 [2024-07-11 07:15:13.874194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:29.818 [2024-07-11 07:15:13.874244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.881721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5220 00:22:30.077 [2024-07-11 07:15:13.882697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.882763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.891960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eff18 00:22:30.077 [2024-07-11 07:15:13.892723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.892772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.900638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f2d80 00:22:30.077 [2024-07-11 07:15:13.902024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.902071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.910084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f81e0 00:22:30.077 [2024-07-11 07:15:13.910692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.910720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.918646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6020 00:22:30.077 [2024-07-11 07:15:13.920005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.920054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.927883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6458 00:22:30.077 [2024-07-11 07:15:13.928080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.928100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.936953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f9f68 00:22:30.077 [2024-07-11 07:15:13.938908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.938956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.946475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fe2e8 00:22:30.077 [2024-07-11 07:15:13.946558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.946578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.956753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ee190 00:22:30.077 [2024-07-11 07:15:13.958035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.958066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.965954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f92c0 00:22:30.077 [2024-07-11 07:15:13.967252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.967299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.974785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e4578 00:22:30.077 [2024-07-11 07:15:13.976063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.976094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.983701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6890 00:22:30.077 [2024-07-11 07:15:13.985006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.985037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:13.992057] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eaef0 00:22:30.077 [2024-07-11 07:15:13.993453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:13.993492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.001144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190edd58 00:22:30.077 [2024-07-11 07:15:14.001573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.001600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.010042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5220 00:22:30.077 [2024-07-11 07:15:14.010942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.010989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.018867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e0a68 00:22:30.077 [2024-07-11 07:15:14.019415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.019458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.027653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f81e0 00:22:30.077 [2024-07-11 07:15:14.028174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.028206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.036497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f2510 00:22:30.077 [2024-07-11 07:15:14.037019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.037050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.045275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ea248 00:22:30.077 [2024-07-11 07:15:14.045763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.045795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:30.077 
[2024-07-11 07:15:14.054071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e3d08 00:22:30.077 [2024-07-11 07:15:14.054642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.054676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.063177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fa3a0 00:22:30.077 [2024-07-11 07:15:14.063754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.063790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.072036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6890 00:22:30.077 [2024-07-11 07:15:14.072538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.072566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.080895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fd208 00:22:30.077 [2024-07-11 07:15:14.081448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.081509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.089778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e49b0 00:22:30.077 [2024-07-11 07:15:14.090563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.090626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.098665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190edd58 00:22:30.077 [2024-07-11 07:15:14.099284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.099363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.107532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de470 00:22:30.077 [2024-07-11 07:15:14.108097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.108131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.116457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f1430 00:22:30.077 [2024-07-11 07:15:14.117182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.077 [2024-07-11 07:15:14.117230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:30.077 [2024-07-11 07:15:14.125245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f46d0 00:22:30.078 [2024-07-11 07:15:14.125964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.078 [2024-07-11 07:15:14.126015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:30.078 [2024-07-11 07:15:14.134043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6cc8 00:22:30.336 [2024-07-11 07:15:14.135216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.135266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.144731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f20d8 00:22:30.336 [2024-07-11 07:15:14.145669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.145698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.151438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190df118 00:22:30.336 [2024-07-11 07:15:14.151524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.151544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.162350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e1b48 00:22:30.336 [2024-07-11 07:15:14.162841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.162871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.171256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fe720 00:22:30.336 [2024-07-11 07:15:14.171911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.171976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.178976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e49b0 00:22:30.336 [2024-07-11 07:15:14.179200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.179242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.188690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e8088 00:22:30.336 [2024-07-11 07:15:14.189039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.189067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.197661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eaab8 00:22:30.336 [2024-07-11 07:15:14.197994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.336 [2024-07-11 07:15:14.198023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:30.336 [2024-07-11 07:15:14.206609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fda78 00:22:30.336 [2024-07-11 07:15:14.207697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.207728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.216104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f35f0 00:22:30.337 [2024-07-11 07:15:14.216690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.216752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.223740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e27f0 00:22:30.337 [2024-07-11 07:15:14.224850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.224881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.232697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eea00 00:22:30.337 [2024-07-11 07:15:14.233774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.233804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.240987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5ec8 00:22:30.337 [2024-07-11 07:15:14.242039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.242069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.250872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e95a0 00:22:30.337 [2024-07-11 07:15:14.251542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.251590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.259057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de470 00:22:30.337 [2024-07-11 07:15:14.259583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.259614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.267979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fcdd0 00:22:30.337 [2024-07-11 07:15:14.268208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.268227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.278252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e2c28 00:22:30.337 [2024-07-11 07:15:14.279570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.279617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.287972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eff18 00:22:30.337 [2024-07-11 07:15:14.288901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.288947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.294723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6fa8 00:22:30.337 [2024-07-11 07:15:14.294920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.294938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.305559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f7100 00:22:30.337 [2024-07-11 07:15:14.306145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.306179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.313236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190df988 00:22:30.337 [2024-07-11 07:15:14.314164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.314211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.322025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eb760 00:22:30.337 [2024-07-11 07:15:14.323263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.323311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.330524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f2510 00:22:30.337 [2024-07-11 07:15:14.331611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.331657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.340360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f96f8 00:22:30.337 [2024-07-11 07:15:14.341030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.341079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.348549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fc128 00:22:30.337 [2024-07-11 07:15:14.349044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.349074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.357472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ebfd0 00:22:30.337 [2024-07-11 07:15:14.357702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.357726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.367772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f7100 00:22:30.337 [2024-07-11 07:15:14.369218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.369249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.375803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec840 00:22:30.337 [2024-07-11 07:15:14.376824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.376854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:30.337 [2024-07-11 07:15:14.384437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec840 00:22:30.337 [2024-07-11 07:15:14.385499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.337 [2024-07-11 07:15:14.385538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.395733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6300 00:22:30.596 [2024-07-11 07:15:14.396678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.396726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.402786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f9f68 00:22:30.596 [2024-07-11 07:15:14.402974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.402992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.413737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de470 00:22:30.596 [2024-07-11 07:15:14.414344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.414409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.421467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e4de8 00:22:30.596 [2024-07-11 07:15:14.422688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 
07:15:14.422752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.429900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fc560 00:22:30.596 [2024-07-11 07:15:14.431090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.431138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.440907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e95a0 00:22:30.596 [2024-07-11 07:15:14.442630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.442683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.448995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e49b0 00:22:30.596 [2024-07-11 07:15:14.450203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.450233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.459577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e88f8 00:22:30.596 [2024-07-11 07:15:14.460282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.460330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.468000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5ec8 00:22:30.596 [2024-07-11 07:15:14.469354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.469384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.477293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e8d30 00:22:30.596 [2024-07-11 07:15:14.477802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.596 [2024-07-11 07:15:14.477832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.488404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ddc00 00:22:30.596 [2024-07-11 07:15:14.489430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:30.596 [2024-07-11 07:15:14.489466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:30.596 [2024-07-11 07:15:14.494778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fd208 00:22:30.597 [2024-07-11 07:15:14.494942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.494961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.503712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f81e0 00:22:30.597 [2024-07-11 07:15:14.504072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.504104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.512537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eb760 00:22:30.597 [2024-07-11 07:15:14.512882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.512913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.521322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5220 00:22:30.597 [2024-07-11 07:15:14.521593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.521644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.530135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f1430 00:22:30.597 [2024-07-11 07:15:14.530413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.530479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.539228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e9e10 00:22:30.597 [2024-07-11 07:15:14.539743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.539774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.548127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f96f8 00:22:30.597 [2024-07-11 07:15:14.548366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12195 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.548423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.558838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ef270 00:22:30.597 [2024-07-11 07:15:14.560171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.560203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.566748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190dece0 00:22:30.597 [2024-07-11 07:15:14.567820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.567850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.574941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de470 00:22:30.597 [2024-07-11 07:15:14.575114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.575133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.583858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e38d0 00:22:30.597 [2024-07-11 07:15:14.584205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.584237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.592633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec840 00:22:30.597 [2024-07-11 07:15:14.592987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.593015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.601469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e99d8 00:22:30.597 [2024-07-11 07:15:14.601762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.601782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.610230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eb760 00:22:30.597 [2024-07-11 07:15:14.610512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.610547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.619004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5a90 00:22:30.597 [2024-07-11 07:15:14.619219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.619237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.627775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e95a0 00:22:30.597 [2024-07-11 07:15:14.627965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.627984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.636567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5a90 00:22:30.597 [2024-07-11 07:15:14.636766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.636785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.647701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5a90 00:22:30.597 [2024-07-11 07:15:14.648581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.597 [2024-07-11 07:15:14.648627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:30.597 [2024-07-11 07:15:14.654617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec840 00:22:30.856 [2024-07-11 07:15:14.654764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.654784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.665711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e5ec8 00:22:30.856 [2024-07-11 07:15:14.666238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.666267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.674733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190eb760 00:22:30.856 [2024-07-11 07:15:14.675436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:14828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.675494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.683548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ef6a8 00:22:30.856 [2024-07-11 07:15:14.684226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.684274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.692315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6fa8 00:22:30.856 [2024-07-11 07:15:14.692967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.693015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.701118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fa7d8 00:22:30.856 [2024-07-11 07:15:14.701745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.701823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.709922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fc998 00:22:30.856 [2024-07-11 07:15:14.710530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.710594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.718699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fb480 00:22:30.856 [2024-07-11 07:15:14.719305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.719384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.726513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec840 00:22:30.856 [2024-07-11 07:15:14.726764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.726799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.737375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f7da8 00:22:30.856 [2024-07-11 07:15:14.738029] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.738093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.745071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f7538 00:22:30.856 [2024-07-11 07:15:14.746317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.746366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.753905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ed920 00:22:30.856 [2024-07-11 07:15:14.755226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.755257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.762398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f8e88 00:22:30.856 [2024-07-11 07:15:14.763339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.763369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.771612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6738 00:22:30.856 [2024-07-11 07:15:14.771840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.771859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.780542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ec408 00:22:30.856 [2024-07-11 07:15:14.780971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.781000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.789314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ea248 00:22:30.856 [2024-07-11 07:15:14.789707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.789736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.798110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fd640 00:22:30.856 [2024-07-11 07:15:14.798524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.856 [2024-07-11 07:15:14.798556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:30.856 [2024-07-11 07:15:14.806915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e3d08 00:22:30.857 [2024-07-11 07:15:14.807242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.807269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.815722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fb480 00:22:30.857 [2024-07-11 07:15:14.816021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.816055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.824521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f7970 00:22:30.857 [2024-07-11 07:15:14.824812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.824831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.833662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f9f68 00:22:30.857 [2024-07-11 07:15:14.834218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.834247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.842432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fa7d8 00:22:30.857 [2024-07-11 07:15:14.843016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.843050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.851280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ef270 00:22:30.857 [2024-07-11 07:15:14.852251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.852297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.860514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e6300 00:22:30.857 [2024-07-11 
07:15:14.861438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.861494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.869691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ebb98 00:22:30.857 [2024-07-11 07:15:14.870001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.870025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.878869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e27f0 00:22:30.857 [2024-07-11 07:15:14.879610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.879659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.887825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190de8a8 00:22:30.857 [2024-07-11 07:15:14.888311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.888340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.895656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190e8088 00:22:30.857 [2024-07-11 07:15:14.895819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.895837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.905401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f31b8 00:22:30.857 [2024-07-11 07:15:14.905725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.857 [2024-07-11 07:15:14.905753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:30.857 [2024-07-11 07:15:14.914374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190df118 00:22:31.115 [2024-07-11 07:15:14.914943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.115 [2024-07-11 07:15:14.914989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:31.115 [2024-07-11 07:15:14.922911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190ef270 
00:22:31.115 [2024-07-11 07:15:14.923209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.115 [2024-07-11 07:15:14.923235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:31.115 [2024-07-11 07:15:14.934063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fe720 00:22:31.115 [2024-07-11 07:15:14.934877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.115 [2024-07-11 07:15:14.934910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:31.116 [2024-07-11 07:15:14.942691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f7970 00:22:31.116 [2024-07-11 07:15:14.944731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.116 [2024-07-11 07:15:14.944794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.116 [2024-07-11 07:15:14.952327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190f6458 00:22:31.116 [2024-07-11 07:15:14.952798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.116 [2024-07-11 07:15:14.952829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:31.116 [2024-07-11 07:15:14.961171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190fdeb0 00:22:31.116 [2024-07-11 07:15:14.961768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.116 [2024-07-11 07:15:14.961804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:31.116 [2024-07-11 07:15:14.969779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190dfdc0 00:22:31.116 [2024-07-11 07:15:14.971401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.116 [2024-07-11 07:15:14.971459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:31.116 [2024-07-11 07:15:14.978723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201bd0) with pdu=0x2000190dfdc0 00:22:31.116 [2024-07-11 07:15:14.979633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.116 [2024-07-11 07:15:14.979679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:31.116 00:22:31.116 Latency(us) 00:22:31.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:31.116 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:31.116 nvme0n1 : 2.01 28289.48 110.51 0.00 0.00 4519.95 1869.27 13583.83 00:22:31.116 =================================================================================================================== 00:22:31.116 Total : 28289.48 110.51 0.00 0.00 4519.95 1869.27 13583.83 00:22:31.116 0 00:22:31.116 07:15:14 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:31.116 07:15:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:31.116 07:15:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:31.116 | .driver_specific 00:22:31.116 | .nvme_error 00:22:31.116 | .status_code 00:22:31.116 | .command_transient_transport_error' 00:22:31.116 07:15:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:31.375 07:15:15 -- host/digest.sh@71 -- # (( 222 > 0 )) 00:22:31.375 07:15:15 -- host/digest.sh@73 -- # killprocess 86451 00:22:31.375 07:15:15 -- common/autotest_common.sh@926 -- # '[' -z 86451 ']' 00:22:31.375 07:15:15 -- common/autotest_common.sh@930 -- # kill -0 86451 00:22:31.375 07:15:15 -- common/autotest_common.sh@931 -- # uname 00:22:31.375 07:15:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:31.375 07:15:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86451 00:22:31.375 07:15:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:31.375 07:15:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:31.375 killing process with pid 86451 00:22:31.375 07:15:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86451' 00:22:31.375 07:15:15 -- common/autotest_common.sh@945 -- # kill 86451 00:22:31.375 Received shutdown signal, test time was about 2.000000 seconds 00:22:31.375 00:22:31.375 Latency(us) 00:22:31.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.375 =================================================================================================================== 00:22:31.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.375 07:15:15 -- common/autotest_common.sh@950 -- # wait 86451 00:22:31.634 07:15:15 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:31.634 07:15:15 -- host/digest.sh@54 -- # local rw bs qd 00:22:31.634 07:15:15 -- host/digest.sh@56 -- # rw=randwrite 00:22:31.634 07:15:15 -- host/digest.sh@56 -- # bs=131072 00:22:31.634 07:15:15 -- host/digest.sh@56 -- # qd=16 00:22:31.634 07:15:15 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:31.634 07:15:15 -- host/digest.sh@58 -- # bperfpid=86549 00:22:31.634 07:15:15 -- host/digest.sh@60 -- # waitforlisten 86549 /var/tmp/bperf.sock 00:22:31.634 07:15:15 -- common/autotest_common.sh@819 -- # '[' -z 86549 ']' 00:22:31.634 07:15:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:31.634 07:15:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:31.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:31.634 07:15:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
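The (( 222 > 0 )) check traced above is the pass criterion for this digest run: the script reads the transient transport error counter out of bdev_get_iostat over the bperf RPC socket and requires it to be non-zero. A minimal shell sketch of that check, using only the RPC call and jq path shown in the trace (the errcount variable name is illustrative, not part of the script):

    # Query per-bdev NVMe error statistics from the bperf app and pull out the
    # transient transport error counter (populated because the controller was
    # created with bdev_nvme_set_options --nvme-error-stat).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # this run reported 222, so the 4096-byte randwrite digest test passes

With the check done, the trace then kills the bperf process (pid 86451) and launches a fresh instance for the next case, run_bperf_err randwrite 131072 16.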
00:22:31.634 07:15:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:31.634 07:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:31.634 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:31.634 Zero copy mechanism will not be used. 00:22:31.634 [2024-07-11 07:15:15.544701] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:31.634 [2024-07-11 07:15:15.544802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86549 ] 00:22:31.634 [2024-07-11 07:15:15.674691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.892 [2024-07-11 07:15:15.757609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.459 07:15:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:32.459 07:15:16 -- common/autotest_common.sh@852 -- # return 0 00:22:32.459 07:15:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:32.459 07:15:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:32.718 07:15:16 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:32.718 07:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.718 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:22:32.718 07:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.718 07:15:16 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.718 07:15:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.977 nvme0n1 00:22:32.977 07:15:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:32.977 07:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.977 07:15:17 -- common/autotest_common.sh@10 -- # set +x 00:22:33.237 07:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.237 07:15:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:33.237 07:15:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.237 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:33.237 Zero copy mechanism will not be used. 00:22:33.237 Running I/O for 2 seconds... 
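The trace above is the setup for the 131072-byte, queue-depth-16 randwrite pass. Condensed into one sequence as a sketch — the commands are taken verbatim from the trace, rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, backgrounding with & mirrors the bperfpid/waitforlisten handling, and rpc_cmd (no -s flag in the trace) goes to the default RPC socket rather than the bperf one:

    # Start bdevperf on its own RPC socket; -o 131072 / -q 16 match run_bperf_err randwrite 131072 16.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Enable per-controller NVMe error counters and unlimited retries, make sure no
    # earlier crc32c injection is active, then attach the target with the TCP data
    # digest (--ddgst) enabled on the host side.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c corruption (-i 32 as traced) and drive the 2-second workload; the
    # corrupted digests surface as the Data digest error / COMMAND TRANSIENT TRANSPORT
    # ERROR pairs that fill this log.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests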
00:22:33.237 [2024-07-11 07:15:17.128115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.128532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.128571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.132546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.132735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.132763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.136600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.136699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.136722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.140584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.140699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.140719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.144560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.144705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.144726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.148712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.148794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.148814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.152732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.152955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.152992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.156915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.157099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.157130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.161001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.161158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.161178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.165080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.165201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.165222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.169093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.169231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.169251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.173136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.173258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.173285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.177198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.177281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.177301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.181210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.181332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.181355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.185280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.185409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.185429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.189488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.189644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.189676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.193525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.193657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.193686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.197523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.197645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.197667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.201497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.201611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.201636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.205549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.205639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.205659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.209600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.209683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.237 [2024-07-11 07:15:17.209704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.237 [2024-07-11 07:15:17.213601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.237 [2024-07-11 07:15:17.213712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.213733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.217657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.217790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.217810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.221718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.221877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.221904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.225804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.225947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.225973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.229804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.229950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.229972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.233869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.233993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.234014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.238010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.238099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 
[2024-07-11 07:15:17.238120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.241979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.242089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.242110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.246074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.246200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.246220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.250144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.250338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.250371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.254269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.254530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.254562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.258406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.258633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.258691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.262439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.262559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.262580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.266480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.266656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.266717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.270495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.270626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.270662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.274555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.274668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.274689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.278636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.278789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.278809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.282854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.282998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.283018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.287031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.287232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.287252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.238 [2024-07-11 07:15:17.291040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.238 [2024-07-11 07:15:17.291287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.238 [2024-07-11 07:15:17.291330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.295255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.295445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.295483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.299351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.299581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.299644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.303424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.303559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.303581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.307533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.307620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.307640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.311659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.311809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.311829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.315805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.315975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.315995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.320099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.320260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.320281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.324300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.324412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.324434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.328577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.328706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.328728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.332835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.332948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.332969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.337049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.337204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.337225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.341235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.341371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.341391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.345623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.345772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.345794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.349744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.349929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.349956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.353900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 
[2024-07-11 07:15:17.354058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.354083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.357990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.358118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.358145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.362066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.362174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.362195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.366136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.366250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.366270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.370236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.370405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.370427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.374406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.374568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.374608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.378601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.378783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.499 [2024-07-11 07:15:17.378803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.499 [2024-07-11 07:15:17.382708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.499 [2024-07-11 07:15:17.382928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.382949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.386982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.387133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.387153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.391103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.391246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.391268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.395134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.395275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.395296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.399299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.399399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.399420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.403453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.403552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.403574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.407513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.407640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.407660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.411594] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.411736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.411757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.415754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.415938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.415959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.419980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.420183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.420204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.424166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.424318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.424339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.428213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.428336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.428356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.432332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.432442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.432507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.436405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.436582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.436605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:33.500 [2024-07-11 07:15:17.440465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.440542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.440563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.444555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.444685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.444705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.448686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.448888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.448925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.452927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.453113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.453134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.456990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.457123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.457144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.460999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.461123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.461151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.465171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.465313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.465334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.469313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.469398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.469419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.473396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.500 [2024-07-11 07:15:17.473501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.500 [2024-07-11 07:15:17.473523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.500 [2024-07-11 07:15:17.477543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.477693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.477715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.481703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.482057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.482084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.485939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.486177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.486203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.490130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.490342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.490365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.494340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.494421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.494445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.498514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.498685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.498723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.502608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.502733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.502754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.506719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.506949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.506975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.511063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.511261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.511299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.515479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.515680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.515708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.519750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.519922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.519943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.524049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.524183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.524203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.528060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.528141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.528160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.532091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.532216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.532236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.536205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.536311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.536331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.540171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.540244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.540264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.544341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.544503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.544526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.548283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.548465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 07:15:17.548485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.501 [2024-07-11 07:15:17.552419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.501 [2024-07-11 07:15:17.552616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.501 [2024-07-11 
07:15:17.552637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.761 [2024-07-11 07:15:17.556608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.761 [2024-07-11 07:15:17.556824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.761 [2024-07-11 07:15:17.556876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.761 [2024-07-11 07:15:17.560678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.761 [2024-07-11 07:15:17.560760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.761 [2024-07-11 07:15:17.560780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.761 [2024-07-11 07:15:17.564857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.564931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.564951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.568960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.569068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.569088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.573008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.573095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.573115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.577111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.577256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.577276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.581102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.581345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.762 [2024-07-11 07:15:17.581380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.585092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.585167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.585187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.589236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.589399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.589419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.593247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.593330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.593350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.597346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.597496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.597518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.601391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.601480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.601500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.605353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.605428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.605460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.609476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.609625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.609645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.613463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.613661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.613681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.617474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.617669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.617694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.621542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.621662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.621681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.625522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.625616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.625636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.629643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.629769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.629789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.633597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.633693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.633713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.637613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.637721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.637742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.641655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.641788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.641808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.645721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.645863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.645884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.649847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.649945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.649965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.653845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.653940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.762 [2024-07-11 07:15:17.653961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.762 [2024-07-11 07:15:17.657815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.762 [2024-07-11 07:15:17.657933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.657953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.661874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.661997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.662017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.665913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.665986] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.666005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.669999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.670100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.670119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.674090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.674236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.674256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.678131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.678418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.678439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.682180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.682271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.682330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.686340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.686453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.686486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.690388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.690487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.690508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.694521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.694661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.694681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.698572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.698685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.698704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.702593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.702686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.702706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.706796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.706943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.706962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.710930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.711160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.711180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.714915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.715007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.715027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.719099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.719217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.719237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.723096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 
07:15:17.723220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.723239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.727132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.727264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.727284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.731090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.731220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.731240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.735145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.735237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.735258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.739198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.739351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.739370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.743304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.743426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.743447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.747401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.747525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.747545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.751447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 
00:22:33.763 [2024-07-11 07:15:17.751574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.751594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.755424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.755523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.755543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.763 [2024-07-11 07:15:17.759551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.763 [2024-07-11 07:15:17.759678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.763 [2024-07-11 07:15:17.759698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.763534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.763610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.763630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.767549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.767632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.767652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.771644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.771793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.771813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.775680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.775868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.775889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.779616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.779785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.779804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.783657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.783771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.783791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.787654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.787738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.787758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.791667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.791810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.791830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.795682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.795788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.795807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.799838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.799935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.799956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.803858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.804003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.804023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.807869] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.808065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.808085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.811932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.812110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.812129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.764 [2024-07-11 07:15:17.816056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:33.764 [2024-07-11 07:15:17.816227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.764 [2024-07-11 07:15:17.816248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.820125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.820466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.820505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.824248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.824353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.824374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.828491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.828630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.828650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.832596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.832788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.832813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.836812] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.836974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.836994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.840858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.841022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.841042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.844947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.845054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.845074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.849089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.849215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.849235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.853115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.853196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.853216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.857130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.857211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.857232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.861210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.861351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.861372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 
[2024-07-11 07:15:17.865220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.865362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.865381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.869289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.869491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.869512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.873241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.873473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.873493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.877226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.877302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.877322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.881262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.881372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.881392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.885230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.885321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.885341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.889185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.889260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.889279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.893197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.893325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.893352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.897156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.897351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.897372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.901262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.901436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.901467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.905351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.905615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.905640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.909348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.909426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.909457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.913474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.913550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.913570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.917471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.917566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.917586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.921524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.921600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.921621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.925528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.925670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.925696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.025 [2024-07-11 07:15:17.929566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.025 [2024-07-11 07:15:17.929749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.025 [2024-07-11 07:15:17.929775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.933674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.933872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.933893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.937736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.938013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.938034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.941833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.941983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.942003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.945832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.945947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.945967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.949856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.949944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.949964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.953837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.953958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.953978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.957977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.958103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.958129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.962041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.962184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.962204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.966123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.966340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.966366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.970155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.970450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.970503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.974186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.974309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.974346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.978351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.978444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.978465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.982402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.982497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.982518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.986399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.986500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.986520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.990417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.990616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.990656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.994438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.994656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.994676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:17.998549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:17.998739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:17.998759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.002674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.002861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 
[2024-07-11 07:15:18.002886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.006643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.006748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.006767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.010738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.010817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.010837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.014761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.014870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.014889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.018753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.018861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.018880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.022819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.022944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.022964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.026863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.027038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.027058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.030985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.031159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.031179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.034921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.035080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.035100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.038886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.038981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.039000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.042923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.043008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.026 [2024-07-11 07:15:18.043028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.026 [2024-07-11 07:15:18.046891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.026 [2024-07-11 07:15:18.046966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.046986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.050886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.050983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.051003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.054943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.055069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.055088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.058917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.059083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.059102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.063026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.063197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.063217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.066963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.067177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.067197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.070975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.071085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.071106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.074977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.075089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.075108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.027 [2024-07-11 07:15:18.079108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.027 [2024-07-11 07:15:18.079195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.027 [2024-07-11 07:15:18.079216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.083261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.083450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.083503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.087370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.087542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.087564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.091513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.091641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.091660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.095607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.095786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.099687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.099936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.099987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.103696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.103770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.103790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.107791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.107867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.107886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.111764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.111874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.111895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.115778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 
[2024-07-11 07:15:18.115859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.287 [2024-07-11 07:15:18.115879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.287 [2024-07-11 07:15:18.119868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.287 [2024-07-11 07:15:18.119994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.120020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.123885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.124024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.124044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.128005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.128178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.128198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.132144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.132319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.132339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.136195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.136306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.136325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.140266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.140344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.140364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.144356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.144455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.144474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.148364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.148438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.148469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.152427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.152564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.152584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.156395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.156597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.156622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.160520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.160696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.160716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.164397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.164640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.164691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.168461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.168553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.168573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.172531] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.172620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.172640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.176607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.176690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.176711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.180614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.180711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.180731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.184690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.184813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.184833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.188637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.188805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.188824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.192781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.192975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.192996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.196849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.197068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.197088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:34.288 [2024-07-11 07:15:18.200858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.200934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.200954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.204923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.205005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.205025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.208928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.209020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.209040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.212948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.213022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.213042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.217004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.217145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.217165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.221026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.221188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.221208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.225073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.225248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.225273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.229121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.229310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.229330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.233093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.288 [2024-07-11 07:15:18.233253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.288 [2024-07-11 07:15:18.233272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.288 [2024-07-11 07:15:18.237165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.237256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.237275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.241253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.241341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.241361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.245293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.245393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.245413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.249391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.249527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.249547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.253414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.253562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.253583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.257474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.257648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.257668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.261465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.261626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.261652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.265380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.265469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.265490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.269326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.269445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.269477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.273376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.273461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.273481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.277314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.277420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.277440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.281406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.281543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.281564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.285484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.285646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.285665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.289586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.289760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.289786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.293622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.293791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.293816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.297580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.297672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.297691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.301670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.301761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.301780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.305730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.305804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.305824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.309742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.309819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 
[2024-07-11 07:15:18.309840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.313819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.313944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.313964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.317818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.318028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.318049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.321914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.322105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.322125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.325864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.326029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.326048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.329917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.330039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.330058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.334001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.334087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.334106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.338013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.338088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.338108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.289 [2024-07-11 07:15:18.342122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.289 [2024-07-11 07:15:18.342231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.289 [2024-07-11 07:15:18.342251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.346482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.346662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.346683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.350517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.350894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.350915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.354694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.354883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.354904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.358731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.358895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.358914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.362716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.362818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.362837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.366713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.366872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.366892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.370618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.370759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.370779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.374659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.374749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.374768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.378734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.378857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.378876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.382800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.382969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.382989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.386920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.387094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.387113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.390935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.391098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.391117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.394966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.395083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.395102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.398954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.399104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.399123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.403008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.403083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.403103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.407023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.407107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.407127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.411119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.411243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.411268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.415153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.415329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.415355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.419288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 07:15:18.419479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.550 [2024-07-11 07:15:18.419513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.550 [2024-07-11 07:15:18.423263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.550 [2024-07-11 
07:15:18.423426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.423447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.427192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.427327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.427346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.431319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.431408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.431428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.435332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.435405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.435424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.439349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.439458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.439478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.443405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.443542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.443562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.447334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.447519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.447539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.451375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with 
pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.451563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.451582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.455438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.455607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.455627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.459420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.459545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.459564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.463615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.463718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.463737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.467602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.467695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.467715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.471583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.471659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.471679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.475648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.475768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.475788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.479614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.479803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.479822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.483731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.483889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.483908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.487867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.488038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.488064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.491897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.492006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.492042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.496150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.496241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.496262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.500169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.500262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.500283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.504320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.504441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.504494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.508499] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.508651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.508675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.512661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.513006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.513027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.516763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.517005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.517045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.520946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.521192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.521217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.525020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.525207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.525228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.529224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.529325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.529346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.533368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.533504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.533527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.537484] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.537621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.537642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.551 [2024-07-11 07:15:18.541520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.551 [2024-07-11 07:15:18.541659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.551 [2024-07-11 07:15:18.541685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.545552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.545750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.545776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.549761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.549966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.549992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.553841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.554109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.554150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.557864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.557966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.557991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.561986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.562127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.562153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:34.552 [2024-07-11 07:15:18.566083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.566177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.566198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.570208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.570307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.570343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.574296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.574486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.574525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.578310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.578642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.578669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.582257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.582393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.582427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.586508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.586699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.586739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.590673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.590786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.590807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.594832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.594985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.595005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.598848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.598959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.598979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.602828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.602965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.602985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.552 [2024-07-11 07:15:18.607009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.552 [2024-07-11 07:15:18.607208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.552 [2024-07-11 07:15:18.607234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.611121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.611233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.611253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.615208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.615312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.615332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.619321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.619485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.619516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.623361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.623586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.623607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.627503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.627705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.627725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.631558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.631698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.631718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.635609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.635730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.635751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.639608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.639749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.639769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.643707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.643807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.643827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.647701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.647791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.647811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.651813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.651964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.651990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.655833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.656084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.656110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.659903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.660025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.660045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.664076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.664240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.813 [2024-07-11 07:15:18.664260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.813 [2024-07-11 07:15:18.668159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.813 [2024-07-11 07:15:18.668235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.668271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.672236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.672365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.672385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.676292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.676404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.676424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.680361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.680520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.680541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.684544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.684710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.684731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.688557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.688697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.688717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.692537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.692636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.692657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.696675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.696817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.696837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.700659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.700777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.700799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.704739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.704867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 
[2024-07-11 07:15:18.704887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.708727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.708805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.708825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.712825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.712959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.712979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.716955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.717109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.717129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.720981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.721109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.721130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.724977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.725084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.725104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.729086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.729261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.733048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.733233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.733254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.737189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.737383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.737402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.741185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.741344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.741364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.745287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.745373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.745392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.749286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.749457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.749477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.753368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.753479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.753511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.757383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.757487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.757506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.761421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.761590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.761610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.765475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.765659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.765679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.769519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.769692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.769712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.773608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.773728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.773761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.777580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.777662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.777683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.781550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.814 [2024-07-11 07:15:18.781682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.814 [2024-07-11 07:15:18.781701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.814 [2024-07-11 07:15:18.785565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.785672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.785692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.789582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.789661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.789681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.793697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.793842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.793862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.797778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.798002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.798022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.802105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.802239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.802265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.806153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.806259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.806302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.810214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.810331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.810352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.814307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.814471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.814498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.818379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 
07:15:18.818588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.818640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.822399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.822516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.822538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.826548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.826775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.826796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.830699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.830931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.830952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.834879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.835053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.835072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.839015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.839177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.839197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.843010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.843090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.843110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.847118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with 
pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.847257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.847276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.851191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.851279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.851299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.855284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.855375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.855394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.859456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.859619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.859639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.863583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.863791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.863811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.815 [2024-07-11 07:15:18.867826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:34.815 [2024-07-11 07:15:18.868027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.815 [2024-07-11 07:15:18.868048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.872100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.872248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.872269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.876202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.876330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.876350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.880424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.880594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.880613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.884493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.884601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.884620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.888681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.888754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.888773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.892862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.893008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.893027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.896906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.897123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.897149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.901076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.901255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.901275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.905175] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.905298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.905318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.909197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.909277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.909297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.913303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.913446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.913480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.917418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.917569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.917591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.921521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.921644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.921664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.925618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.925767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.925786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.929657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.929901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.929936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.933609] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.933689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.933709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.937707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.937871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.937891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.941775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.941903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.941924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.945949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.946099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.946149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.950023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.950130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.950155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.954015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.954094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.076 [2024-07-11 07:15:18.954114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.076 [2024-07-11 07:15:18.958055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.076 [2024-07-11 07:15:18.958204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.958254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 
[2024-07-11 07:15:18.962083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.962249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.962268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.966178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.966397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.966418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.970108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.970212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.970232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.974084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.974164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.974183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.978173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.978346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.978367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.982251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.982382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.982403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.986359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.986461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.986496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.990497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.990699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.990750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.994557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.994789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.994820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:18.998562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:18.998788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:18.998808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.002724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.002891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.002910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.006706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.006859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.006879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.010742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.010870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.010890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.014848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.014953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.014973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.018856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.018963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.018983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.022956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.023118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.023138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.027069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.027259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.027278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.031185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.031335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.031356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.035219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.035322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.035341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.039275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.039349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.039368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.043342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.043465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.043497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.047403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.047539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.047560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.051422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.051538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.051558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.055573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.055721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.055741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.059507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.059635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.059654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.063532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.063606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.063627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.067655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.067760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.067779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.071661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.071738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.077 [2024-07-11 07:15:19.071758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.077 [2024-07-11 07:15:19.075752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.077 [2024-07-11 07:15:19.075882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.075901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.079719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.079795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.079815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.083725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.083831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.083850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.087851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.088000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.088019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.091850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.092053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.092072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.095857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.096067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.096087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.099853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.099957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 
[2024-07-11 07:15:19.099976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.103900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.103998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.104018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.107962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.108116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.108136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.112082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.112160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.112180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.078 [2024-07-11 07:15:19.116076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1201f10) with pdu=0x2000190fef90 00:22:35.078 [2024-07-11 07:15:19.116146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.078 [2024-07-11 07:15:19.116166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.078 00:22:35.078 Latency(us) 00:22:35.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.078 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:35.078 nvme0n1 : 2.00 7579.77 947.47 0.00 0.00 2106.40 1571.37 10307.03 00:22:35.078 =================================================================================================================== 00:22:35.078 Total : 7579.77 947.47 0.00 0.00 2106.40 1571.37 10307.03 00:22:35.078 0 00:22:35.337 07:15:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:35.337 07:15:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:35.337 07:15:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:35.337 07:15:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:35.337 | .driver_specific 00:22:35.337 | .nvme_error 00:22:35.337 | .status_code 00:22:35.337 | .command_transient_transport_error' 00:22:35.337 07:15:19 -- host/digest.sh@71 -- # (( 489 > 0 )) 00:22:35.337 07:15:19 -- host/digest.sh@73 -- # killprocess 86549 00:22:35.337 07:15:19 -- common/autotest_common.sh@926 -- # '[' -z 86549 ']' 00:22:35.337 07:15:19 -- common/autotest_common.sh@930 -- # kill -0 86549 00:22:35.337 07:15:19 -- 
common/autotest_common.sh@931 -- # uname 00:22:35.337 07:15:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:35.337 07:15:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86549 00:22:35.595 07:15:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:35.595 07:15:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:35.596 07:15:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86549' 00:22:35.596 killing process with pid 86549 00:22:35.596 07:15:19 -- common/autotest_common.sh@945 -- # kill 86549 00:22:35.596 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.596 00:22:35.596 Latency(us) 00:22:35.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.596 =================================================================================================================== 00:22:35.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.596 07:15:19 -- common/autotest_common.sh@950 -- # wait 86549 00:22:35.596 07:15:19 -- host/digest.sh@115 -- # killprocess 86236 00:22:35.596 07:15:19 -- common/autotest_common.sh@926 -- # '[' -z 86236 ']' 00:22:35.596 07:15:19 -- common/autotest_common.sh@930 -- # kill -0 86236 00:22:35.596 07:15:19 -- common/autotest_common.sh@931 -- # uname 00:22:35.596 07:15:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:35.596 07:15:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86236 00:22:35.596 07:15:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:35.596 killing process with pid 86236 00:22:35.596 07:15:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:35.596 07:15:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86236' 00:22:35.596 07:15:19 -- common/autotest_common.sh@945 -- # kill 86236 00:22:35.596 07:15:19 -- common/autotest_common.sh@950 -- # wait 86236 00:22:36.163 00:22:36.163 real 0m18.168s 00:22:36.163 user 0m33.123s 00:22:36.163 sys 0m5.365s 00:22:36.163 07:15:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.163 ************************************ 00:22:36.163 END TEST nvmf_digest_error 00:22:36.163 ************************************ 00:22:36.163 07:15:19 -- common/autotest_common.sh@10 -- # set +x 00:22:36.163 07:15:19 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:36.163 07:15:19 -- host/digest.sh@139 -- # nvmftestfini 00:22:36.163 07:15:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:36.163 07:15:19 -- nvmf/common.sh@116 -- # sync 00:22:36.163 07:15:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:36.163 07:15:20 -- nvmf/common.sh@119 -- # set +e 00:22:36.163 07:15:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:36.163 07:15:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:36.163 rmmod nvme_tcp 00:22:36.163 rmmod nvme_fabrics 00:22:36.163 rmmod nvme_keyring 00:22:36.163 07:15:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:36.163 07:15:20 -- nvmf/common.sh@123 -- # set -e 00:22:36.163 07:15:20 -- nvmf/common.sh@124 -- # return 0 00:22:36.163 07:15:20 -- nvmf/common.sh@477 -- # '[' -n 86236 ']' 00:22:36.163 07:15:20 -- nvmf/common.sh@478 -- # killprocess 86236 00:22:36.163 07:15:20 -- common/autotest_common.sh@926 -- # '[' -z 86236 ']' 00:22:36.163 07:15:20 -- common/autotest_common.sh@930 -- # kill -0 86236 00:22:36.163 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (86236) - No such process 00:22:36.163 Process 
with pid 86236 is not found 00:22:36.163 07:15:20 -- common/autotest_common.sh@953 -- # echo 'Process with pid 86236 is not found' 00:22:36.163 07:15:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:36.163 07:15:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:36.163 07:15:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:36.163 07:15:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.163 07:15:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:36.163 07:15:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.163 07:15:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.163 07:15:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.163 07:15:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:36.163 00:22:36.163 real 0m37.193s 00:22:36.163 user 1m6.552s 00:22:36.163 sys 0m10.961s 00:22:36.163 07:15:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.163 ************************************ 00:22:36.163 END TEST nvmf_digest 00:22:36.163 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:22:36.163 ************************************ 00:22:36.163 07:15:20 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:36.163 07:15:20 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:36.163 07:15:20 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:36.163 07:15:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:36.163 07:15:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:36.163 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:22:36.163 ************************************ 00:22:36.163 START TEST nvmf_mdns_discovery 00:22:36.163 ************************************ 00:22:36.163 07:15:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:36.422 * Looking for test storage... 
00:22:36.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:36.422 07:15:20 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:36.422 07:15:20 -- nvmf/common.sh@7 -- # uname -s 00:22:36.422 07:15:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.422 07:15:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.422 07:15:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.422 07:15:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.423 07:15:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.423 07:15:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.423 07:15:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.423 07:15:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.423 07:15:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.423 07:15:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.423 07:15:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:22:36.423 07:15:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:22:36.423 07:15:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.423 07:15:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.423 07:15:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:36.423 07:15:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:36.423 07:15:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.423 07:15:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.423 07:15:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.423 07:15:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.423 07:15:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.423 07:15:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.423 07:15:20 -- 
paths/export.sh@5 -- # export PATH 00:22:36.423 07:15:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.423 07:15:20 -- nvmf/common.sh@46 -- # : 0 00:22:36.423 07:15:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:36.423 07:15:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:36.423 07:15:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:36.423 07:15:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.423 07:15:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.423 07:15:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:36.423 07:15:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:36.423 07:15:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:36.423 07:15:20 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:36.423 07:15:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:36.423 07:15:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.423 07:15:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:36.423 07:15:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:36.423 07:15:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:36.423 07:15:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.423 07:15:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.423 07:15:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.423 07:15:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:36.423 07:15:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:36.423 07:15:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:36.423 07:15:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:36.423 07:15:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:36.423 07:15:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:36.423 07:15:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.423 07:15:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.423 07:15:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:36.423 07:15:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:36.423 07:15:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:36.423 07:15:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:36.423 07:15:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:36.423 07:15:20 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.423 07:15:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:36.423 07:15:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:36.423 07:15:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:36.423 07:15:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:36.423 07:15:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:36.423 07:15:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:36.423 Cannot find device "nvmf_tgt_br" 00:22:36.423 07:15:20 -- nvmf/common.sh@154 -- # true 00:22:36.423 07:15:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:36.423 Cannot find device "nvmf_tgt_br2" 00:22:36.423 07:15:20 -- nvmf/common.sh@155 -- # true 00:22:36.423 07:15:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:36.423 07:15:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:36.423 Cannot find device "nvmf_tgt_br" 00:22:36.423 07:15:20 -- nvmf/common.sh@157 -- # true 00:22:36.423 07:15:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:36.423 Cannot find device "nvmf_tgt_br2" 00:22:36.423 07:15:20 -- nvmf/common.sh@158 -- # true 00:22:36.423 07:15:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:36.423 07:15:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:36.423 07:15:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:36.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:36.423 07:15:20 -- nvmf/common.sh@161 -- # true 00:22:36.423 07:15:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:36.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:36.423 07:15:20 -- nvmf/common.sh@162 -- # true 00:22:36.423 07:15:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:36.423 07:15:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:36.423 07:15:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:36.423 07:15:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:36.423 07:15:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:36.682 07:15:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:36.682 07:15:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:36.682 07:15:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:36.682 07:15:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:36.682 07:15:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:36.682 07:15:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:36.682 07:15:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:36.682 07:15:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:36.682 07:15:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:36.682 07:15:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:36.682 07:15:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:36.682 07:15:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:22:36.682 07:15:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:36.682 07:15:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:36.682 07:15:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:36.682 07:15:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:36.682 07:15:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:36.682 07:15:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:36.682 07:15:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:36.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:22:36.682 00:22:36.682 --- 10.0.0.2 ping statistics --- 00:22:36.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.682 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:36.682 07:15:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:36.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:36.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:22:36.682 00:22:36.682 --- 10.0.0.3 ping statistics --- 00:22:36.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.682 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:22:36.682 07:15:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:36.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:22:36.682 00:22:36.682 --- 10.0.0.1 ping statistics --- 00:22:36.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.682 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:36.682 07:15:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.682 07:15:20 -- nvmf/common.sh@421 -- # return 0 00:22:36.682 07:15:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:36.682 07:15:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.682 07:15:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:36.682 07:15:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:36.682 07:15:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.682 07:15:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:36.682 07:15:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:36.682 07:15:20 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:36.682 07:15:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:36.682 07:15:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:36.682 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:22:36.682 07:15:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:36.682 07:15:20 -- nvmf/common.sh@469 -- # nvmfpid=86843 00:22:36.682 07:15:20 -- nvmf/common.sh@470 -- # waitforlisten 86843 00:22:36.682 07:15:20 -- common/autotest_common.sh@819 -- # '[' -z 86843 ']' 00:22:36.682 07:15:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.682 07:15:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:36.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.682 07:15:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
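For reference, the veth topology that nvmf_veth_init assembles in the trace above can be rebuilt by hand with roughly the following commands; the interface names, addresses and the nvmf_tgt_ns_spdk namespace are taken directly from the log, and the exact ordering and flags may differ between common.sh revisions (a sketch, not the authoritative setup script):

  # SPDK target runs inside nvmf_tgt_ns_spdk; the initiator stays in the default namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1; the two target-side interfaces get 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (port 4420) in, permit bridge-local forwarding, then sanity-check reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1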
00:22:36.682 07:15:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:36.682 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:22:36.682 [2024-07-11 07:15:20.711827] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:36.682 [2024-07-11 07:15:20.711914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.941 [2024-07-11 07:15:20.854118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.941 [2024-07-11 07:15:20.963315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:36.941 [2024-07-11 07:15:20.963502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.941 [2024-07-11 07:15:20.963521] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.941 [2024-07-11 07:15:20.963534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.941 [2024-07-11 07:15:20.963573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.876 07:15:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:37.876 07:15:21 -- common/autotest_common.sh@852 -- # return 0 00:22:37.876 07:15:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:37.876 07:15:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 07:15:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 [2024-07-11 07:15:21.832061] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 [2024-07-11 07:15:21.840182] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.876 null0 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 null1 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 null2 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 null3 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:22:37.876 07:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.876 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:37.876 07:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@47 -- # hostpid=86893 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:37.876 07:15:21 -- host/mdns_discovery.sh@48 -- # waitforlisten 86893 /tmp/host.sock 00:22:37.876 07:15:21 -- common/autotest_common.sh@819 -- # '[' -z 86893 ']' 00:22:37.877 07:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:37.877 07:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:37.877 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:37.877 07:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:37.877 07:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:37.877 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:38.135 [2024-07-11 07:15:21.945348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:38.135 [2024-07-11 07:15:21.945468] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86893 ] 00:22:38.135 [2024-07-11 07:15:22.086239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.394 [2024-07-11 07:15:22.194051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:38.394 [2024-07-11 07:15:22.194290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.961 07:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:38.961 07:15:22 -- common/autotest_common.sh@852 -- # return 0 00:22:38.961 07:15:22 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:38.961 07:15:22 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:38.961 07:15:22 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:39.219 07:15:23 -- host/mdns_discovery.sh@57 -- # avahipid=86922 00:22:39.219 07:15:23 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:39.219 07:15:23 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:39.219 07:15:23 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:39.219 Process 979 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:39.219 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:39.219 Successfully dropped root privileges. 00:22:39.219 avahi-daemon 0.8 starting up. 00:22:39.219 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:39.219 Successfully called chroot(). 00:22:39.219 Successfully dropped remaining capabilities. 00:22:39.219 No service file found in /etc/avahi/services. 00:22:40.152 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:40.152 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:22:40.152 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:40.152 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:40.152 Network interface enumeration completed. 00:22:40.152 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:22:40.152 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:40.152 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:22:40.152 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:40.152 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1392567629. 
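The avahi-daemon instance started above is fed its configuration through a process substitution (avahi-daemon -f /dev/fd/63); written out as an ordinary config file, the settings echoed by mdns_discovery.sh amount to no more than:

  [server]
  allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
  use-ipv4=yes
  use-ipv6=no

That is, mDNS is restricted to the two in-namespace target interfaces and to IPv4, which matches the startup messages above showing the daemon joining IPv4 multicast groups only on nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3). Once the _nvme-disc._tcp CDC service is published later in the run, it can also be inspected by hand with something like "ip netns exec nvmf_tgt_ns_spdk avahi-browse -rt _nvme-disc._tcp"; avahi-browse is not part of the test itself and is mentioned here only as an optional cross-check.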
00:22:40.152 07:15:24 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:40.152 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.152 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.152 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:40.152 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.152 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.152 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.152 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:40.152 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # sort 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # xargs 00:22:40.152 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.152 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@64 -- # sort 00:22:40.152 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@64 -- # xargs 00:22:40.152 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:40.152 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.152 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.152 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:40.152 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # sort 00:22:40.152 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.152 07:15:24 -- host/mdns_discovery.sh@68 -- # xargs 00:22:40.152 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.446 07:15:24 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@64 -- # sort 00:22:40.447 07:15:24 -- 
host/mdns_discovery.sh@64 -- # xargs 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@68 -- # xargs 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@68 -- # sort 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 [2024-07-11 07:15:24.342642] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@64 -- # sort 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@64 -- # xargs 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 [2024-07-11 07:15:24.404904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 [2024-07-11 07:15:24.444799] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:40.447 07:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.447 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:22:40.447 [2024-07-11 07:15:24.452805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:40.447 07:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=86973 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:40.447 07:15:24 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:41.386 [2024-07-11 07:15:25.242653] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:41.644 Established under name 'CDC' 00:22:41.644 [2024-07-11 07:15:25.642671] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:41.644 [2024-07-11 07:15:25.642695] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:41.644 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:41.644 cookie is 0 00:22:41.644 is_local: 1 00:22:41.644 our_own: 0 00:22:41.644 wide_area: 0 00:22:41.644 multicast: 1 00:22:41.644 cached: 1 00:22:41.901 [2024-07-11 07:15:25.742657] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:41.901 [2024-07-11 07:15:25.742677] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:41.901 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:41.901 cookie is 0 00:22:41.901 is_local: 1 00:22:41.901 our_own: 0 00:22:41.901 wide_area: 0 00:22:41.901 multicast: 1 00:22:41.901 cached: 1 00:22:42.836 [2024-07-11 07:15:26.656297] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:42.836 [2024-07-11 07:15:26.656322] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:42.836 [2024-07-11 07:15:26.656344] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:42.836 [2024-07-11 07:15:26.742396] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:42.836 [2024-07-11 07:15:26.745952] bdev_nvme.c:6759:discovery_attach_cb: 
*INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:42.836 [2024-07-11 07:15:26.745970] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:42.836 [2024-07-11 07:15:26.745989] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.836 [2024-07-11 07:15:26.799573] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:42.836 [2024-07-11 07:15:26.799598] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:42.836 [2024-07-11 07:15:26.831701] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:42.836 [2024-07-11 07:15:26.886167] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:42.836 [2024-07-11 07:15:26.886191] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@80 -- # sort 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@80 -- # xargs 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@76 -- # sort 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@76 -- # xargs 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@68 -- # sort 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@68 -- # xargs 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@64 
-- # jq -r '.[].name' 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@64 -- # sort 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@64 -- # xargs 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # xargs 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@72 -- # xargs 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:46.123 07:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.123 07:15:29 -- common/autotest_common.sh@10 -- # set +x 00:22:46.123 07:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.123 07:15:29 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:47.058 07:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.058 07:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@64 -- # sort 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@64 -- # xargs 00:22:47.058 07:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:47.058 07:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.058 07:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:47.058 07:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:47.058 07:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.058 07:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:47.058 [2024-07-11 07:15:30.979531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:47.058 [2024-07-11 07:15:30.979909] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:47.058 [2024-07-11 07:15:30.979944] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:47.058 [2024-07-11 07:15:30.979976] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:47.058 [2024-07-11 07:15:30.979989] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:47.058 07:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.058 07:15:30 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:47.058 07:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.058 07:15:30 -- common/autotest_common.sh@10 -- # set +x 00:22:47.058 [2024-07-11 07:15:30.987425] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:47.058 [2024-07-11 07:15:30.987903] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:47.058 [2024-07-11 07:15:30.987948] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:47.058 07:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.059 07:15:30 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:47.317 [2024-07-11 07:15:31.118976] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:47.317 [2024-07-11 07:15:31.119138] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:47.317 [2024-07-11 07:15:31.179197] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:47.317 [2024-07-11 07:15:31.179219] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:47.317 [2024-07-11 07:15:31.179225] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:47.317 [2024-07-11 07:15:31.179240] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:47.317 [2024-07-11 07:15:31.179304] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:47.317 [2024-07-11 07:15:31.179313] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:47.317 [2024-07-11 07:15:31.179317] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:47.317 [2024-07-11 07:15:31.179329] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:47.317 [2024-07-11 07:15:31.225059] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:47.318 [2024-07-11 07:15:31.225076] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:47.318 [2024-07-11 07:15:31.225115] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:47.318 [2024-07-11 07:15:31.225123] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:48.253 07:15:31 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:48.253 07:15:31 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:48.253 07:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.253 07:15:31 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:48.253 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:22:48.253 07:15:31 -- host/mdns_discovery.sh@68 -- # sort 00:22:48.253 07:15:31 -- host/mdns_discovery.sh@68 -- # xargs 00:22:48.253 07:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.253 07:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.253 07:15:32 -- common/autotest_common.sh@10 -- # set +x 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@64 -- # sort 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@64 -- # xargs 00:22:48.253 07:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:48.253 07:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.253 07:15:32 -- common/autotest_common.sh@10 -- # set +x 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # xargs 00:22:48.253 07:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:48.253 07:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.253 07:15:32 -- common/autotest_common.sh@10 -- # set +x 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@72 -- # xargs 00:22:48.253 07:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:48.253 07:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:48.253 07:15:32 -- common/autotest_common.sh@10 -- # set +x 00:22:48.253 07:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:48.253 07:15:32 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:48.253 07:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.253 07:15:32 -- common/autotest_common.sh@10 -- # set +x 00:22:48.253 [2024-07-11 07:15:32.284963] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:48.253 [2024-07-11 07:15:32.284991] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:48.253 [2024-07-11 07:15:32.285020] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:48.253 [2024-07-11 07:15:32.285032] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:48.253 [2024-07-11 07:15:32.285871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.253 [2024-07-11 07:15:32.285902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.253 [2024-07-11 07:15:32.285913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.254 [2024-07-11 07:15:32.285921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.254 [2024-07-11 07:15:32.285930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.254 [2024-07-11 07:15:32.285938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.254 [2024-07-11 07:15:32.285946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.254 [2024-07-11 07:15:32.285954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.254 [2024-07-11 07:15:32.285962] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.254 07:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.254 07:15:32 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:48.254 07:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.254 07:15:32 -- common/autotest_common.sh@10 -- # set +x 00:22:48.254 [2024-07-11 07:15:32.295831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.254 [2024-07-11 07:15:32.297093] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:48.254 [2024-07-11 07:15:32.297280] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:48.254 [2024-07-11 07:15:32.298371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.254 [2024-07-11 07:15:32.298564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.254 [2024-07-11 07:15:32.298683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.254 [2024-07-11 07:15:32.298696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.254 [2024-07-11 07:15:32.298707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.254 [2024-07-11 07:15:32.298715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.254 [2024-07-11 07:15:32.298724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.254 [2024-07-11 07:15:32.298731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.254 [2024-07-11 07:15:32.298739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.254 07:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.254 07:15:32 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:48.254 [2024-07-11 07:15:32.305848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.254 [2024-07-11 07:15:32.305961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.254 [2024-07-11 07:15:32.306005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.254 [2024-07-11 07:15:32.306022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.254 [2024-07-11 07:15:32.306032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.254 [2024-07-11 07:15:32.306048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.254 [2024-07-11 07:15:32.306061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.254 [2024-07-11 07:15:32.306069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.254 [2024-07-11 07:15:32.306078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.254 [2024-07-11 07:15:32.306108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.254 [2024-07-11 07:15:32.308338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.513 [2024-07-11 07:15:32.315911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.513 [2024-07-11 07:15:32.315984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.316025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.316040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.513 [2024-07-11 07:15:32.316049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.513 [2024-07-11 07:15:32.316064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.513 [2024-07-11 07:15:32.316076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.513 [2024-07-11 07:15:32.316083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.513 [2024-07-11 07:15:32.316092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.513 [2024-07-11 07:15:32.316118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.513 [2024-07-11 07:15:32.318359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.513 [2024-07-11 07:15:32.318432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.318491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.318508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.513 [2024-07-11 07:15:32.318518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.513 [2024-07-11 07:15:32.318533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.513 [2024-07-11 07:15:32.318545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.513 [2024-07-11 07:15:32.318553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.513 [2024-07-11 07:15:32.318561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.513 [2024-07-11 07:15:32.318574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.513 [2024-07-11 07:15:32.325957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.513 [2024-07-11 07:15:32.326029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.326069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.326084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.513 [2024-07-11 07:15:32.326093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.513 [2024-07-11 07:15:32.326107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.513 [2024-07-11 07:15:32.326133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.513 [2024-07-11 07:15:32.326143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.513 [2024-07-11 07:15:32.326151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.513 [2024-07-11 07:15:32.326164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.513 [2024-07-11 07:15:32.328404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.513 [2024-07-11 07:15:32.328488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.328530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.328544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.513 [2024-07-11 07:15:32.328554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.513 [2024-07-11 07:15:32.328568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.513 [2024-07-11 07:15:32.328581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.513 [2024-07-11 07:15:32.328589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.513 [2024-07-11 07:15:32.328597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.513 [2024-07-11 07:15:32.328611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.513 [2024-07-11 07:15:32.336002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.513 [2024-07-11 07:15:32.336071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.336111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.336125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.513 [2024-07-11 07:15:32.336134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.513 [2024-07-11 07:15:32.336149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.513 [2024-07-11 07:15:32.336174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.513 [2024-07-11 07:15:32.336183] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.513 [2024-07-11 07:15:32.336192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.513 [2024-07-11 07:15:32.336204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.513 [2024-07-11 07:15:32.338449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.513 [2024-07-11 07:15:32.338529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.338572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.338600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.513 [2024-07-11 07:15:32.338610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.513 [2024-07-11 07:15:32.338624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.513 [2024-07-11 07:15:32.338636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.513 [2024-07-11 07:15:32.338644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.513 [2024-07-11 07:15:32.338652] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.513 [2024-07-11 07:15:32.338665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.513 [2024-07-11 07:15:32.346047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.513 [2024-07-11 07:15:32.346124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.513 [2024-07-11 07:15:32.346166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.346181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.514 [2024-07-11 07:15:32.346191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.346205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.346232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.346242] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.346250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.514 [2024-07-11 07:15:32.346263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.514 [2024-07-11 07:15:32.348503] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.514 [2024-07-11 07:15:32.348573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.348614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.348629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.514 [2024-07-11 07:15:32.348638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.348652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.348665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.348672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.348680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.514 [2024-07-11 07:15:32.348693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.514 [2024-07-11 07:15:32.356094] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.514 [2024-07-11 07:15:32.356164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.356204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.356219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.514 [2024-07-11 07:15:32.356227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.356242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.356267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.356276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.356284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.514 [2024-07-11 07:15:32.356297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.514 [2024-07-11 07:15:32.358549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.514 [2024-07-11 07:15:32.358623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.358665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.358679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.514 [2024-07-11 07:15:32.358689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.358704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.358717] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.358725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.358733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.514 [2024-07-11 07:15:32.358746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.514 [2024-07-11 07:15:32.366139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.514 [2024-07-11 07:15:32.366209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.366249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.366264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.514 [2024-07-11 07:15:32.366273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.366296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.366323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.366333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.366341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.514 [2024-07-11 07:15:32.366354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.514 [2024-07-11 07:15:32.368595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.514 [2024-07-11 07:15:32.368663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.368703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.368718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.514 [2024-07-11 07:15:32.368727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.368742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.368755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.368762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.368770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.514 [2024-07-11 07:15:32.368783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.514 [2024-07-11 07:15:32.376184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.514 [2024-07-11 07:15:32.376253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.376294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.376308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.514 [2024-07-11 07:15:32.376318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.376332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.376357] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.376367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.376375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.514 [2024-07-11 07:15:32.376387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.514 [2024-07-11 07:15:32.378637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.514 [2024-07-11 07:15:32.378705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.378746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.378761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.514 [2024-07-11 07:15:32.378770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.378784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.378805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.378814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.378823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.514 [2024-07-11 07:15:32.378835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.514 [2024-07-11 07:15:32.386232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.514 [2024-07-11 07:15:32.386484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.386635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.386685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.514 [2024-07-11 07:15:32.386915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.387051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.387131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.387252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.387405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.514 [2024-07-11 07:15:32.387428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.514 [2024-07-11 07:15:32.388682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.514 [2024-07-11 07:15:32.388757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.388800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.388815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.514 [2024-07-11 07:15:32.388825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.388840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.388853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.388860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.388869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.514 [2024-07-11 07:15:32.388881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.514 [2024-07-11 07:15:32.396436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.514 [2024-07-11 07:15:32.396514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.396556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.396571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.514 [2024-07-11 07:15:32.396580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.396608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.514 [2024-07-11 07:15:32.396622] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.514 [2024-07-11 07:15:32.396630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.514 [2024-07-11 07:15:32.396638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.514 [2024-07-11 07:15:32.396650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.514 [2024-07-11 07:15:32.398723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.514 [2024-07-11 07:15:32.398792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.398833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.514 [2024-07-11 07:15:32.398847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.514 [2024-07-11 07:15:32.398856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.514 [2024-07-11 07:15:32.398871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.515 [2024-07-11 07:15:32.398883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.515 [2024-07-11 07:15:32.398891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.515 [2024-07-11 07:15:32.398899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.515 [2024-07-11 07:15:32.398911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.515 [2024-07-11 07:15:32.406487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.515 [2024-07-11 07:15:32.406558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.406598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.406613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.515 [2024-07-11 07:15:32.406622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.515 [2024-07-11 07:15:32.406650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.515 [2024-07-11 07:15:32.406664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.515 [2024-07-11 07:15:32.406671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.515 [2024-07-11 07:15:32.406679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.515 [2024-07-11 07:15:32.406692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.515 [2024-07-11 07:15:32.408765] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.515 [2024-07-11 07:15:32.408833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.408873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.408888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.515 [2024-07-11 07:15:32.408897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.515 [2024-07-11 07:15:32.408911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.515 [2024-07-11 07:15:32.408923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.515 [2024-07-11 07:15:32.408931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.515 [2024-07-11 07:15:32.408939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.515 [2024-07-11 07:15:32.408951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.515 [2024-07-11 07:15:32.416532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.515 [2024-07-11 07:15:32.416613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.416653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.416667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.515 [2024-07-11 07:15:32.416677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.515 [2024-07-11 07:15:32.416691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.515 [2024-07-11 07:15:32.416703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.515 [2024-07-11 07:15:32.416710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.515 [2024-07-11 07:15:32.416719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.515 [2024-07-11 07:15:32.416731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.515 [2024-07-11 07:15:32.418807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:48.515 [2024-07-11 07:15:32.418874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.418914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.418929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfae20 with addr=10.0.0.3, port=4420 00:22:48.515 [2024-07-11 07:15:32.418938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfae20 is same with the state(5) to be set 00:22:48.515 [2024-07-11 07:15:32.418952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfae20 (9): Bad file descriptor 00:22:48.515 [2024-07-11 07:15:32.418964] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:48.515 [2024-07-11 07:15:32.418972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:48.515 [2024-07-11 07:15:32.418980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:48.515 [2024-07-11 07:15:32.418993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.515 [2024-07-11 07:15:32.426576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.515 [2024-07-11 07:15:32.426651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.426692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.515 [2024-07-11 07:15:32.426706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0deb0 with addr=10.0.0.2, port=4420 00:22:48.515 [2024-07-11 07:15:32.426715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0deb0 is same with the state(5) to be set 00:22:48.515 [2024-07-11 07:15:32.426729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0deb0 (9): Bad file descriptor 00:22:48.515 [2024-07-11 07:15:32.426742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.515 [2024-07-11 07:15:32.426750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.515 [2024-07-11 07:15:32.426758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.515 [2024-07-11 07:15:32.426771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.515 [2024-07-11 07:15:32.427569] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:48.515 [2024-07-11 07:15:32.427588] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:48.515 [2024-07-11 07:15:32.427606] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:48.515 [2024-07-11 07:15:32.429562] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:48.515 [2024-07-11 07:15:32.429585] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:48.515 [2024-07-11 07:15:32.429602] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:48.515 [2024-07-11 07:15:32.513630] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:48.515 [2024-07-11 07:15:32.515624] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.452 07:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:49.452 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@68 -- # sort 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@68 -- # xargs 00:22:49.452 07:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 
00:22:49.452 07:15:33 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@64 -- # sort 00:22:49.452 07:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.452 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@64 -- # xargs 00:22:49.452 07:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:49.452 07:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # xargs 00:22:49.452 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.452 07:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:49.452 07:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.452 07:15:33 -- host/mdns_discovery.sh@72 -- # xargs 00:22:49.452 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.452 07:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.709 07:15:33 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:49.709 07:15:33 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:49.709 07:15:33 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:49.709 07:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.710 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.710 07:15:33 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:49.710 07:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.710 07:15:33 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:49.710 07:15:33 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:49.710 07:15:33 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:49.710 07:15:33 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:49.710 07:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.710 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:22:49.710 07:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.710 07:15:33 -- host/mdns_discovery.sh@172 -- # sleep 1 00:22:49.710 [2024-07-11 07:15:33.642668] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:50.644 07:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.644 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@80 -- # sort 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@80 -- # xargs 00:22:50.644 07:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.644 07:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.644 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.644 07:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.644 07:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.644 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.644 07:15:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.644 07:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:50.903 07:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.903 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.903 07:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:50.903 07:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.903 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.903 07:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:50.903 07:15:34 -- common/autotest_common.sh@640 -- # local es=0 00:22:50.903 07:15:34 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:50.903 07:15:34 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:50.903 07:15:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:50.903 07:15:34 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:50.903 07:15:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:50.903 07:15:34 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:50.903 07:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.903 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:22:50.903 [2024-07-11 07:15:34.811713] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:22:50.903 2024/07/11 07:15:34 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:50.903 request: 00:22:50.903 { 00:22:50.903 "method": "bdev_nvme_start_mdns_discovery", 00:22:50.903 "params": { 00:22:50.903 "name": "mdns", 00:22:50.903 "svcname": "_nvme-disc._http", 00:22:50.903 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:50.903 } 00:22:50.903 } 00:22:50.903 Got JSON-RPC error response 00:22:50.903 GoRPCClient: error on JSON-RPC call 00:22:50.903 07:15:34 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:50.903 07:15:34 -- common/autotest_common.sh@643 -- # es=1 00:22:50.903 07:15:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:50.903 07:15:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:50.903 07:15:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:50.903 07:15:34 -- host/mdns_discovery.sh@183 -- # sleep 5 00:22:51.162 [2024-07-11 07:15:35.200314] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:51.421 [2024-07-11 07:15:35.300311] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:51.421 [2024-07-11 07:15:35.400315] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:51.421 [2024-07-11 07:15:35.400333] 
bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:51.421 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:51.421 cookie is 0 00:22:51.421 is_local: 1 00:22:51.421 our_own: 0 00:22:51.421 wide_area: 0 00:22:51.421 multicast: 1 00:22:51.421 cached: 1 00:22:51.679 [2024-07-11 07:15:35.500318] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:51.679 [2024-07-11 07:15:35.500338] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:51.679 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:51.679 cookie is 0 00:22:51.679 is_local: 1 00:22:51.679 our_own: 0 00:22:51.679 wide_area: 0 00:22:51.679 multicast: 1 00:22:51.679 cached: 1 00:22:52.614 [2024-07-11 07:15:36.407579] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:52.614 [2024-07-11 07:15:36.407600] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:52.614 [2024-07-11 07:15:36.407616] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:52.614 [2024-07-11 07:15:36.493661] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:22:52.614 [2024-07-11 07:15:36.507474] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:52.614 [2024-07-11 07:15:36.507493] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:52.614 [2024-07-11 07:15:36.507508] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:52.614 [2024-07-11 07:15:36.557242] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:52.615 [2024-07-11 07:15:36.557266] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:52.615 [2024-07-11 07:15:36.594364] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:22:52.615 [2024-07-11 07:15:36.652928] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:52.615 [2024-07-11 07:15:36.652951] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@80 -- # sort 00:22:55.900 07:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:55.900 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@80 -- # xargs 00:22:55.900 07:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:22:55.900 07:15:39 -- 
host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:55.900 07:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.900 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@76 -- # xargs 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@76 -- # sort 00:22:55.900 07:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:55.900 07:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.900 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@64 -- # xargs 00:22:55.900 07:15:39 -- host/mdns_discovery.sh@64 -- # sort 00:22:56.159 07:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.159 07:15:39 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:56.159 07:15:39 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:56.159 07:15:39 -- common/autotest_common.sh@640 -- # local es=0 00:22:56.159 07:15:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:56.159 07:15:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:56.159 07:15:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:56.159 07:15:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:56.159 07:15:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:56.159 07:15:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:56.159 07:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.159 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:22:56.159 [2024-07-11 07:15:39.995641] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:22:56.159 2024/07/11 07:15:39 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:56.159 request: 00:22:56.159 { 00:22:56.159 "method": "bdev_nvme_start_mdns_discovery", 00:22:56.159 "params": { 00:22:56.159 "name": "cdc", 00:22:56.159 "svcname": "_nvme-disc._tcp", 00:22:56.159 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:56.159 } 00:22:56.159 } 00:22:56.159 Got JSON-RPC error response 00:22:56.159 GoRPCClient: error on JSON-RPC call 00:22:56.159 07:15:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:56.159 07:15:40 -- common/autotest_common.sh@643 -- # es=1 00:22:56.159 07:15:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:56.159 07:15:40 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:56.159 07:15:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:56.159 07:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.159 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@76 -- # sort 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@76 -- # xargs 00:22:56.159 07:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:56.159 07:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.159 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@64 -- # sort 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@64 -- # xargs 00:22:56.159 07:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:56.159 07:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.159 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.159 07:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@197 -- # kill 86893 00:22:56.159 07:15:40 -- host/mdns_discovery.sh@200 -- # wait 86893 00:22:56.418 [2024-07-11 07:15:40.260296] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:56.418 07:15:40 -- host/mdns_discovery.sh@201 -- # kill 86973 00:22:56.418 Got SIGTERM, quitting. 00:22:56.418 07:15:40 -- host/mdns_discovery.sh@202 -- # kill 86922 00:22:56.418 07:15:40 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:22:56.418 07:15:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:56.418 07:15:40 -- nvmf/common.sh@116 -- # sync 00:22:56.418 Got SIGTERM, quitting. 00:22:56.418 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:56.418 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:56.418 avahi-daemon 0.8 exiting. 
00:22:56.418 07:15:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:56.418 07:15:40 -- nvmf/common.sh@119 -- # set +e 00:22:56.418 07:15:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:56.418 07:15:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:56.418 rmmod nvme_tcp 00:22:56.418 rmmod nvme_fabrics 00:22:56.418 rmmod nvme_keyring 00:22:56.677 07:15:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:56.677 07:15:40 -- nvmf/common.sh@123 -- # set -e 00:22:56.677 07:15:40 -- nvmf/common.sh@124 -- # return 0 00:22:56.677 07:15:40 -- nvmf/common.sh@477 -- # '[' -n 86843 ']' 00:22:56.677 07:15:40 -- nvmf/common.sh@478 -- # killprocess 86843 00:22:56.677 07:15:40 -- common/autotest_common.sh@926 -- # '[' -z 86843 ']' 00:22:56.677 07:15:40 -- common/autotest_common.sh@930 -- # kill -0 86843 00:22:56.677 07:15:40 -- common/autotest_common.sh@931 -- # uname 00:22:56.677 07:15:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:56.677 07:15:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86843 00:22:56.677 07:15:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:56.677 killing process with pid 86843 00:22:56.677 07:15:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:56.677 07:15:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86843' 00:22:56.677 07:15:40 -- common/autotest_common.sh@945 -- # kill 86843 00:22:56.677 07:15:40 -- common/autotest_common.sh@950 -- # wait 86843 00:22:56.936 07:15:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:56.936 07:15:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:56.936 07:15:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:56.936 07:15:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.936 07:15:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:56.936 07:15:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.936 07:15:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.936 07:15:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.936 07:15:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:56.936 00:22:56.936 real 0m20.592s 00:22:56.936 user 0m40.371s 00:22:56.936 sys 0m1.995s 00:22:56.936 07:15:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.936 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.936 ************************************ 00:22:56.936 END TEST nvmf_mdns_discovery 00:22:56.936 ************************************ 00:22:56.936 07:15:40 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:22:56.936 07:15:40 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:56.936 07:15:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:56.936 07:15:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.936 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:22:56.936 ************************************ 00:22:56.936 START TEST nvmf_multipath 00:22:56.936 ************************************ 00:22:56.936 07:15:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:56.936 * Looking for test storage... 
00:22:56.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:56.936 07:15:40 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:56.936 07:15:40 -- nvmf/common.sh@7 -- # uname -s 00:22:56.936 07:15:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.936 07:15:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.936 07:15:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.936 07:15:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.936 07:15:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.936 07:15:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.936 07:15:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.936 07:15:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.936 07:15:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.936 07:15:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.937 07:15:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:22:56.937 07:15:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:22:56.937 07:15:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.937 07:15:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.937 07:15:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:56.937 07:15:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:56.937 07:15:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.937 07:15:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.937 07:15:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.937 07:15:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.937 07:15:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.937 07:15:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.937 07:15:40 -- paths/export.sh@5 
-- # export PATH 00:22:56.937 07:15:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.937 07:15:40 -- nvmf/common.sh@46 -- # : 0 00:22:56.937 07:15:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:56.937 07:15:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:56.937 07:15:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:56.937 07:15:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.937 07:15:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.937 07:15:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:56.937 07:15:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:56.937 07:15:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:56.937 07:15:40 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:56.937 07:15:40 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:56.937 07:15:40 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.937 07:15:40 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:56.937 07:15:40 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.937 07:15:40 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:56.937 07:15:40 -- host/multipath.sh@30 -- # nvmftestinit 00:22:56.937 07:15:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:56.937 07:15:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.937 07:15:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:56.937 07:15:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:56.937 07:15:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:56.937 07:15:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.937 07:15:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.937 07:15:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.937 07:15:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:56.937 07:15:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:56.937 07:15:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:56.937 07:15:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:56.937 07:15:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:56.937 07:15:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:56.937 07:15:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.937 07:15:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.937 07:15:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:56.937 07:15:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:56.937 07:15:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:56.937 07:15:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:56.937 07:15:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:56.937 07:15:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.937 07:15:40 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:56.937 07:15:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:56.937 07:15:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:56.937 07:15:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:56.937 07:15:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:56.937 07:15:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:56.937 Cannot find device "nvmf_tgt_br" 00:22:56.937 07:15:40 -- nvmf/common.sh@154 -- # true 00:22:56.937 07:15:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:56.937 Cannot find device "nvmf_tgt_br2" 00:22:56.937 07:15:40 -- nvmf/common.sh@155 -- # true 00:22:56.937 07:15:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:57.195 07:15:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:57.195 Cannot find device "nvmf_tgt_br" 00:22:57.195 07:15:41 -- nvmf/common.sh@157 -- # true 00:22:57.195 07:15:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:57.195 Cannot find device "nvmf_tgt_br2" 00:22:57.195 07:15:41 -- nvmf/common.sh@158 -- # true 00:22:57.195 07:15:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:57.195 07:15:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:57.195 07:15:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:57.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:57.196 07:15:41 -- nvmf/common.sh@161 -- # true 00:22:57.196 07:15:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:57.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:57.196 07:15:41 -- nvmf/common.sh@162 -- # true 00:22:57.196 07:15:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:57.196 07:15:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:57.196 07:15:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:57.196 07:15:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:57.196 07:15:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:57.196 07:15:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:57.196 07:15:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:57.196 07:15:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:57.196 07:15:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:57.196 07:15:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:57.196 07:15:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:57.196 07:15:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:57.196 07:15:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:57.196 07:15:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:57.196 07:15:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:57.196 07:15:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:57.196 07:15:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:57.196 07:15:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:57.196 07:15:41 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:57.196 07:15:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:57.196 07:15:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:57.454 07:15:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:57.455 07:15:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:57.455 07:15:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:57.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:22:57.455 00:22:57.455 --- 10.0.0.2 ping statistics --- 00:22:57.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.455 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:57.455 07:15:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:57.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:57.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:22:57.455 00:22:57.455 --- 10.0.0.3 ping statistics --- 00:22:57.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.455 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:57.455 07:15:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:57.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:57.455 00:22:57.455 --- 10.0.0.1 ping statistics --- 00:22:57.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.455 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:57.455 07:15:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.455 07:15:41 -- nvmf/common.sh@421 -- # return 0 00:22:57.455 07:15:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:57.455 07:15:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.455 07:15:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:57.455 07:15:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:57.455 07:15:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.455 07:15:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:57.455 07:15:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:57.455 07:15:41 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:57.455 07:15:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:57.455 07:15:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:57.455 07:15:41 -- common/autotest_common.sh@10 -- # set +x 00:22:57.455 07:15:41 -- nvmf/common.sh@469 -- # nvmfpid=87484 00:22:57.455 07:15:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:57.455 07:15:41 -- nvmf/common.sh@470 -- # waitforlisten 87484 00:22:57.455 07:15:41 -- common/autotest_common.sh@819 -- # '[' -z 87484 ']' 00:22:57.455 07:15:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.455 07:15:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:57.455 07:15:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:57.455 07:15:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.455 07:15:41 -- common/autotest_common.sh@10 -- # set +x 00:22:57.455 [2024-07-11 07:15:41.363181] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:57.455 [2024-07-11 07:15:41.363259] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.455 [2024-07-11 07:15:41.504828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:57.713 [2024-07-11 07:15:41.615098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:57.713 [2024-07-11 07:15:41.615291] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.713 [2024-07-11 07:15:41.615310] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.713 [2024-07-11 07:15:41.615321] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.713 [2024-07-11 07:15:41.615499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.713 [2024-07-11 07:15:41.615518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.308 07:15:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:58.308 07:15:42 -- common/autotest_common.sh@852 -- # return 0 00:22:58.308 07:15:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:58.308 07:15:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:58.308 07:15:42 -- common/autotest_common.sh@10 -- # set +x 00:22:58.308 07:15:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.308 07:15:42 -- host/multipath.sh@33 -- # nvmfapp_pid=87484 00:22:58.308 07:15:42 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:58.568 [2024-07-11 07:15:42.555193] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.568 07:15:42 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:58.826 Malloc0 00:22:58.826 07:15:42 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:59.083 07:15:43 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:59.341 07:15:43 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.616 [2024-07-11 07:15:43.427887] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.616 07:15:43 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.616 [2024-07-11 07:15:43.608021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.616 07:15:43 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:59.616 07:15:43 -- host/multipath.sh@44 -- # bdevperf_pid=87582 
00:22:59.616 07:15:43 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.616 07:15:43 -- host/multipath.sh@47 -- # waitforlisten 87582 /var/tmp/bdevperf.sock 00:22:59.616 07:15:43 -- common/autotest_common.sh@819 -- # '[' -z 87582 ']' 00:22:59.616 07:15:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.616 07:15:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:59.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.616 07:15:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.616 07:15:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:59.616 07:15:43 -- common/autotest_common.sh@10 -- # set +x 00:23:00.552 07:15:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:00.552 07:15:44 -- common/autotest_common.sh@852 -- # return 0 00:23:00.552 07:15:44 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:00.810 07:15:44 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:01.377 Nvme0n1 00:23:01.377 07:15:45 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:01.635 Nvme0n1 00:23:01.635 07:15:45 -- host/multipath.sh@78 -- # sleep 1 00:23:01.635 07:15:45 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:02.569 07:15:46 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:02.569 07:15:46 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:02.827 07:15:46 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:03.086 07:15:46 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:03.086 07:15:46 -- host/multipath.sh@65 -- # dtrace_pid=87669 00:23:03.086 07:15:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87484 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:03.086 07:15:46 -- host/multipath.sh@66 -- # sleep 6 00:23:09.680 07:15:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:09.680 07:15:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:09.680 07:15:53 -- host/multipath.sh@67 -- # active_port=4421 00:23:09.680 07:15:53 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:09.680 Attaching 4 probes... 
00:23:09.680 @path[10.0.0.2, 4421]: 21605 00:23:09.680 @path[10.0.0.2, 4421]: 22062 00:23:09.680 @path[10.0.0.2, 4421]: 21752 00:23:09.680 @path[10.0.0.2, 4421]: 22033 00:23:09.680 @path[10.0.0.2, 4421]: 22082 00:23:09.680 07:15:53 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:09.680 07:15:53 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:09.680 07:15:53 -- host/multipath.sh@69 -- # sed -n 1p 00:23:09.680 07:15:53 -- host/multipath.sh@69 -- # port=4421 00:23:09.680 07:15:53 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:09.680 07:15:53 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:09.680 07:15:53 -- host/multipath.sh@72 -- # kill 87669 00:23:09.680 07:15:53 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:09.680 07:15:53 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:09.680 07:15:53 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:09.680 07:15:53 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:09.680 07:15:53 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:09.680 07:15:53 -- host/multipath.sh@65 -- # dtrace_pid=87802 00:23:09.680 07:15:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87484 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:09.680 07:15:53 -- host/multipath.sh@66 -- # sleep 6 00:23:16.243 07:15:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:16.243 07:15:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:16.243 07:15:59 -- host/multipath.sh@67 -- # active_port=4420 00:23:16.243 07:15:59 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:16.243 Attaching 4 probes... 
00:23:16.243 @path[10.0.0.2, 4420]: 23113 00:23:16.243 @path[10.0.0.2, 4420]: 23332 00:23:16.243 @path[10.0.0.2, 4420]: 23596 00:23:16.243 @path[10.0.0.2, 4420]: 23610 00:23:16.243 @path[10.0.0.2, 4420]: 23474 00:23:16.243 07:15:59 -- host/multipath.sh@69 -- # sed -n 1p 00:23:16.243 07:15:59 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:16.243 07:15:59 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:16.243 07:15:59 -- host/multipath.sh@69 -- # port=4420 00:23:16.243 07:15:59 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:16.243 07:15:59 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:16.243 07:15:59 -- host/multipath.sh@72 -- # kill 87802 00:23:16.243 07:15:59 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:16.243 07:15:59 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:16.243 07:15:59 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:16.243 07:16:00 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:16.502 07:16:00 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:16.502 07:16:00 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87484 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:16.502 07:16:00 -- host/multipath.sh@65 -- # dtrace_pid=87938 00:23:16.502 07:16:00 -- host/multipath.sh@66 -- # sleep 6 00:23:23.063 07:16:06 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:23.063 07:16:06 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:23.063 07:16:06 -- host/multipath.sh@67 -- # active_port=4421 00:23:23.063 07:16:06 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:23.063 Attaching 4 probes... 
00:23:23.063 @path[10.0.0.2, 4421]: 14097 00:23:23.063 @path[10.0.0.2, 4421]: 21618 00:23:23.063 @path[10.0.0.2, 4421]: 21671 00:23:23.063 @path[10.0.0.2, 4421]: 21696 00:23:23.063 @path[10.0.0.2, 4421]: 21816 00:23:23.063 07:16:06 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:23.063 07:16:06 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:23.063 07:16:06 -- host/multipath.sh@69 -- # sed -n 1p 00:23:23.063 07:16:06 -- host/multipath.sh@69 -- # port=4421 00:23:23.063 07:16:06 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:23.063 07:16:06 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:23.063 07:16:06 -- host/multipath.sh@72 -- # kill 87938 00:23:23.063 07:16:06 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:23.063 07:16:06 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:23.063 07:16:06 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:23.063 07:16:06 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:23.321 07:16:07 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:23.321 07:16:07 -- host/multipath.sh@65 -- # dtrace_pid=88067 00:23:23.321 07:16:07 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87484 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:23.321 07:16:07 -- host/multipath.sh@66 -- # sleep 6 00:23:29.881 07:16:13 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:29.881 07:16:13 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:29.881 07:16:13 -- host/multipath.sh@67 -- # active_port= 00:23:29.881 07:16:13 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.881 Attaching 4 probes... 
00:23:29.881 00:23:29.881 00:23:29.881 00:23:29.881 00:23:29.881 00:23:29.881 07:16:13 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:29.881 07:16:13 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:29.881 07:16:13 -- host/multipath.sh@69 -- # sed -n 1p 00:23:29.881 07:16:13 -- host/multipath.sh@69 -- # port= 00:23:29.881 07:16:13 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:29.881 07:16:13 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:29.881 07:16:13 -- host/multipath.sh@72 -- # kill 88067 00:23:29.881 07:16:13 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.881 07:16:13 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:29.881 07:16:13 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:29.882 07:16:13 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:29.882 07:16:13 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:29.882 07:16:13 -- host/multipath.sh@65 -- # dtrace_pid=88199 00:23:29.882 07:16:13 -- host/multipath.sh@66 -- # sleep 6 00:23:29.882 07:16:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87484 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:36.441 07:16:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:36.441 07:16:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:36.441 07:16:20 -- host/multipath.sh@67 -- # active_port=4421 00:23:36.441 07:16:20 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:36.441 Attaching 4 probes... 
00:23:36.441 @path[10.0.0.2, 4421]: 21066 00:23:36.441 @path[10.0.0.2, 4421]: 21417 00:23:36.441 @path[10.0.0.2, 4421]: 21439 00:23:36.441 @path[10.0.0.2, 4421]: 21432 00:23:36.441 @path[10.0.0.2, 4421]: 21463 00:23:36.441 07:16:20 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:36.441 07:16:20 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:36.441 07:16:20 -- host/multipath.sh@69 -- # sed -n 1p 00:23:36.441 07:16:20 -- host/multipath.sh@69 -- # port=4421 00:23:36.441 07:16:20 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.441 07:16:20 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.441 07:16:20 -- host/multipath.sh@72 -- # kill 88199 00:23:36.441 07:16:20 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:36.441 07:16:20 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:36.441 [2024-07-11 07:16:20.377069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377473] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the 
state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.441 [2024-07-11 07:16:20.377713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.442 [2024-07-11 07:16:20.377720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.442 [2024-07-11 07:16:20.377727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.442 [2024-07-11 07:16:20.377735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.442 [2024-07-11 07:16:20.377746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe70d0 is same with the state(5) to be set 00:23:36.442 07:16:20 -- host/multipath.sh@101 -- # sleep 1 00:23:37.378 07:16:21 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:37.378 07:16:21 -- host/multipath.sh@65 -- # dtrace_pid=88329 00:23:37.378 07:16:21 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87484 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:37.378 07:16:21 -- host/multipath.sh@66 -- # sleep 6 00:23:43.941 07:16:27 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:43.941 07:16:27 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:43.941 07:16:27 -- host/multipath.sh@67 -- # active_port=4420 00:23:43.941 07:16:27 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.941 Attaching 4 probes... 
00:23:43.941 @path[10.0.0.2, 4420]: 22478 00:23:43.941 @path[10.0.0.2, 4420]: 22795 00:23:43.941 @path[10.0.0.2, 4420]: 22841 00:23:43.941 @path[10.0.0.2, 4420]: 22874 00:23:43.941 @path[10.0.0.2, 4420]: 22714 00:23:43.941 07:16:27 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:43.941 07:16:27 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:43.941 07:16:27 -- host/multipath.sh@69 -- # sed -n 1p 00:23:43.941 07:16:27 -- host/multipath.sh@69 -- # port=4420 00:23:43.941 07:16:27 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:43.941 07:16:27 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:43.941 07:16:27 -- host/multipath.sh@72 -- # kill 88329 00:23:43.941 07:16:27 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.941 07:16:27 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:43.941 [2024-07-11 07:16:27.924255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.941 07:16:27 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:44.200 07:16:28 -- host/multipath.sh@111 -- # sleep 6 00:23:50.773 07:16:34 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:50.773 07:16:34 -- host/multipath.sh@65 -- # dtrace_pid=88527 00:23:50.773 07:16:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87484 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:50.773 07:16:34 -- host/multipath.sh@66 -- # sleep 6 00:23:57.410 07:16:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:57.410 07:16:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:57.410 07:16:40 -- host/multipath.sh@67 -- # active_port=4421 00:23:57.410 07:16:40 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.410 Attaching 4 probes... 
00:23:57.410 @path[10.0.0.2, 4421]: 20726 00:23:57.410 @path[10.0.0.2, 4421]: 21025 00:23:57.410 @path[10.0.0.2, 4421]: 21181 00:23:57.410 @path[10.0.0.2, 4421]: 21075 00:23:57.410 @path[10.0.0.2, 4421]: 21288 00:23:57.410 07:16:40 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:57.410 07:16:40 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:57.410 07:16:40 -- host/multipath.sh@69 -- # sed -n 1p 00:23:57.410 07:16:40 -- host/multipath.sh@69 -- # port=4421 00:23:57.410 07:16:40 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:57.410 07:16:40 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:57.410 07:16:40 -- host/multipath.sh@72 -- # kill 88527 00:23:57.410 07:16:40 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.410 07:16:40 -- host/multipath.sh@114 -- # killprocess 87582 00:23:57.410 07:16:40 -- common/autotest_common.sh@926 -- # '[' -z 87582 ']' 00:23:57.410 07:16:40 -- common/autotest_common.sh@930 -- # kill -0 87582 00:23:57.410 07:16:40 -- common/autotest_common.sh@931 -- # uname 00:23:57.410 07:16:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:57.410 07:16:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87582 00:23:57.410 killing process with pid 87582 00:23:57.410 07:16:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:57.411 07:16:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:57.411 07:16:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87582' 00:23:57.411 07:16:40 -- common/autotest_common.sh@945 -- # kill 87582 00:23:57.411 07:16:40 -- common/autotest_common.sh@950 -- # wait 87582 00:23:57.411 Connection closed with partial response: 00:23:57.411 00:23:57.411 00:23:57.411 07:16:40 -- host/multipath.sh@116 -- # wait 87582 00:23:57.411 07:16:40 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:57.411 [2024-07-11 07:15:43.665273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:57.411 [2024-07-11 07:15:43.665366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87582 ] 00:23:57.411 [2024-07-11 07:15:43.794359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.411 [2024-07-11 07:15:43.877284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.411 Running I/O for 90 seconds... 
[try.txt I/O trace: from [2024-07-11 07:15:53.634274] onward the bdevperf log repeats paired nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* entries on qid:1, one pair per outstanding READ/WRITE (nsid:1, len:8, lba roughly 50808-52056, cid and sqhd varying), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 while the path under test reports the inaccessible ANA state.]
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.654942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.415 [2024-07-11 07:15:53.654960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.654987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.415 [2024-07-11 07:15:53.655005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.655032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.415 [2024-07-11 07:15:53.655050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.655077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.415 [2024-07-11 07:15:53.655095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.655121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.415 [2024-07-11 07:15:53.655139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.655166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.415 [2024-07-11 07:15:53.655185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.655212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.415 [2024-07-11 07:15:53.655230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.655266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.415 [2024-07-11 07:15:53.655285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.415 [2024-07-11 07:15:53.655312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.655332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.655499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.655590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.655726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.655770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.416 [2024-07-11 07:15:53.655869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.655961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.655988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.656935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.656962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.656980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.657017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.657036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.657814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.657847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.657880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.416 [2024-07-11 07:15:53.657900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.657929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.657948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.416 [2024-07-11 07:15:53.657974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.416 [2024-07-11 07:15:53.657992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:23:57.416 [2024-07-11 07:15:53.658019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.658912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.658966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.658994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.417 [2024-07-11 07:15:53.659420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.417 [2024-07-11 07:15:53.659929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.659956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.659974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.660001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.417 [2024-07-11 07:15:53.660020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.417 [2024-07-11 07:15:53.660058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.660231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.660275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:23:57.418 [2024-07-11 07:15:53.660868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.660958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.660976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.661004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.661021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.661049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.661067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.661954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.661987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.662090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.662135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.662244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.418 [2024-07-11 07:15:53.662351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.418 [2024-07-11 07:15:53.662966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.418 [2024-07-11 07:15:53.662993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.419 [2024-07-11 07:15:53.663101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.419 [2024-07-11 07:15:53.663146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:57.419 [2024-07-11 07:15:53.663201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.419 [2024-07-11 07:15:53.663292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.419 [2024-07-11 07:15:53.663388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.419 [2024-07-11 07:15:53.663567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.419 [2024-07-11 07:15:53.663613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 
nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.419 [2024-07-11 07:15:53.663708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.419 [2024-07-11 07:15:53.663735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.419 [2024-07-11 07:15:53.663753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.663780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.663798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.663833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.663862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.663899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.663928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.663955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.663973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 
dnr:0 00:23:57.420 [2024-07-11 07:15:53.664690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.664844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.664871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.664889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.665703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.665736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.665770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.665790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.665817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.665837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.665864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.665882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.665909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.665928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.665955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.665973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.666123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.666170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.666216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.666368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.420 [2024-07-11 07:15:53.666412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.420 [2024-07-11 07:15:53.666590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.420 [2024-07-11 07:15:53.666611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.666647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.666696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.666730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.666792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.666838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.421 [2024-07-11 07:15:53.666868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.666913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.666943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.666972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.666989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.421 [2024-07-11 07:15:53.667778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:23:57.421 [2024-07-11 07:15:53.667904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.667971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.667983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.668001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.668013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.668031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.668043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.668061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.668073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.421 [2024-07-11 07:15:53.668090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.421 [2024-07-11 07:15:53.668103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.668120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.668133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.668151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.668163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.668181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.668193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.668210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.668223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.668240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.668268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.668286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.668298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.668317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.668340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.669088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.669157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.669240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.669273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.669340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.669408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.669970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.669990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.670003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.670060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:75 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.670092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.670144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.670176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.670207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.670252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.670309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.670367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.670402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.422 [2024-07-11 07:15:53.670437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.422 [2024-07-11 07:15:53.670471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.422 [2024-07-11 07:15:53.670507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.670522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.670649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.670733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.670811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:23:57.423 [2024-07-11 07:15:53.670928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.670975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.670988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.671020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.671063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.671098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.671130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.671162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.671194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.671225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.671257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.671289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.671335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.671366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.671384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.671402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.672057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.672108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.672142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.672173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.672204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.672234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.672265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.672296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.672326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.672357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.672388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.423 [2024-07-11 07:15:53.672418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.423 [2024-07-11 07:15:53.672436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.423 [2024-07-11 07:15:53.672464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.672536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.672580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.424 [2024-07-11 07:15:53.672615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.672648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.672682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.672715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.672748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.672781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.672844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.672889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.672920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.672950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.672968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.672981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.673527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.673548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.673562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:23:57.424 [2024-07-11 07:15:53.681645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.681659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.681911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.424 [2024-07-11 07:15:53.681941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.681971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.424 [2024-07-11 07:15:53.681989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.424 [2024-07-11 07:15:53.682001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.682442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.682457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.683236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.683266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.683326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.683355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.425 [2024-07-11 07:15:53.683386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.683428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.683522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.683971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.683990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.684002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.684020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.684032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.684049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.684062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.684079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.684091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.684109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.684121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.684138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.425 [2024-07-11 07:15:53.684151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.684168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.425 [2024-07-11 07:15:53.684180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.425 [2024-07-11 07:15:53.684198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0 00:23:57.426 [2024-07-11 07:15:53.684414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.684969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.684987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.684999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.685088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.685117] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.685872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.685934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.685964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.685981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.685994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.686011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:57.426 [2024-07-11 07:15:53.686024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.686041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.686053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.686071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.686083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.686111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.426 [2024-07-11 07:15:53.686124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.686143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.426 [2024-07-11 07:15:53.686156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.426 [2024-07-11 07:15:53.686173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.686870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.686977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.686994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:23:57.427 [2024-07-11 07:15:53.687073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.687204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.687234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.687264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.427 [2024-07-11 07:15:53.687389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.427 [2024-07-11 07:15:53.687554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.427 [2024-07-11 07:15:53.687573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.687617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.687648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.687982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.687994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.688023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:57.428 [2024-07-11 07:15:53.688059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.688090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.688120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.688798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.688850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.688891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.688923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.688954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.688972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.688984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.689014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.689074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.428 [2024-07-11 07:15:53.689134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.428 [2024-07-11 07:15:53.689516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.428 [2024-07-11 07:15:53.689538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.689646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.689678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:23:57.429 [2024-07-11 07:15:53.689704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.689782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.689874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.689941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.689971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.689989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.690725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.429 [2024-07-11 07:15:53.690773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.690850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.690862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.691357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.691378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.691400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.691413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.691431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.691444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.691480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.429 [2024-07-11 07:15:53.691493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.691525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.429 [2024-07-11 07:15:53.691553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.429 [2024-07-11 07:15:53.691575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.691589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.691621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.691651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.691683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.691714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.691746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.691777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.691822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.691853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.691883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.691913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.691949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.691981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.691998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:23:57.430 [2024-07-11 07:15:53.692275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.430 [2024-07-11 07:15:53.692886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.430 [2024-07-11 07:15:53.692977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.430 [2024-07-11 07:15:53.692994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.693007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.693024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.693036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.693054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.693067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.693084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.693097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.693114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.693127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.693145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.693158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.693176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.693194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.700703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.431 [2024-07-11 07:15:53.700769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.700978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.700995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.701007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.701025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.701048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.701069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.701082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.701100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.701112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.701897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.701923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.701958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.701975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.701994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.702039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.702068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.702128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.702158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.702217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.431 [2024-07-11 07:15:53.702320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.431 [2024-07-11 07:15:53.702544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.431 [2024-07-11 07:15:53.702618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.431 [2024-07-11 07:15:53.702636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.702821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.702850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.702939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.702968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.702985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.702998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.432 [2024-07-11 07:15:53.703466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.432 [2024-07-11 07:15:53.703710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.432 [2024-07-11 07:15:53.703739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.432 [2024-07-11 07:15:53.703757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 
nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:15:53.703769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:15:53.703787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:15:53.703799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:15:53.704587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:15:53.704612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.094150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.094548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.094654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.094697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:23:57.433 [2024-07-11 07:16:00.094886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.094897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.094973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.094984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.095218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.095246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.433 [2024-07-11 07:16:00.095303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.095528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.433 [2024-07-11 07:16:00.095541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.433 [2024-07-11 07:16:00.096135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:81 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.096734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.096971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.096988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.097085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.097148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.097233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 
p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.097261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.097290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.434 [2024-07-11 07:16:00.097347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.097375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.434 [2024-07-11 07:16:00.097392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.434 [2024-07-11 07:16:00.097404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.097970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.097987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.097999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:57.435 [2024-07-11 07:16:00.098111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.098259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.098996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.099018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.099052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.099081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.099112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.099141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.099169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.099197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.099231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.099261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.099289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.099327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.435 [2024-07-11 07:16:00.099356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.435 [2024-07-11 07:16:00.099372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.435 [2024-07-11 07:16:00.099384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.436 [2024-07-11 07:16:00.099696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:23:57.436 [2024-07-11 07:16:00.099746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.436 [2024-07-11 07:16:00.099758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.436 [2024-07-11 07:16:00.099786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.436 [2024-07-11 07:16:00.099898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.099981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.099998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.436 [2024-07-11 07:16:00.100214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.436 [2024-07-11 07:16:00.100242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.436 [2024-07-11 07:16:00.100298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.436 [2024-07-11 07:16:00.100416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.436 [2024-07-11 07:16:00.100433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.100453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.100472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.100485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.100902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.100923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.100944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.100957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.100974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.100986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:91 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.101710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:23:57.437 [2024-07-11 07:16:00.101945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.101982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.101995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.102012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.102025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.102042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.102055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.102073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.102085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.102103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.437 [2024-07-11 07:16:00.102115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.102133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.437 [2024-07-11 07:16:00.102145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.437 [2024-07-11 07:16:00.102163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.102175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.102206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.102235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.102265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.102325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.102363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.102396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.102431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.102475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.102496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.111588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.111652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.111699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.111729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.111760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.111790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.111821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.111851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.111881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.111926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.111955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.111973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.111985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:57.438 [2024-07-11 07:16:00.112045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.112209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:72 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.112572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.112586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.113382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.113405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.113429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.113443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.113494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.113509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.113553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.113570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.113590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.438 [2024-07-11 07:16:00.113604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.438 [2024-07-11 07:16:00.113624] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.438 [2024-07-11 07:16:00.113649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.113685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.113719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.113753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.113787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.113864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.113908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.113936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.113966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.113982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.113994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 
m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.114263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.114375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.114409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.114574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.114959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.114976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.114988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.115012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.439 [2024-07-11 07:16:00.115024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.115041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.115053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.115070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.439 [2024-07-11 07:16:00.115082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.115099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.115111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.115129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.115141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.439 [2024-07-11 07:16:00.115158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.439 [2024-07-11 07:16:00.115170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.115199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.115228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.115765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.115849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.115910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.115937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:72 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.115973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.115992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 
dnr:0 00:23:57.440 [2024-07-11 07:16:00.116618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.440 [2024-07-11 07:16:00.116698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.440 [2024-07-11 07:16:00.116957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.440 [2024-07-11 07:16:00.116968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.116985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.116997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.441 [2024-07-11 07:16:00.117630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.117917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.117975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.117992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.118009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.441 [2024-07-11 07:16:00.118166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.441 [2024-07-11 07:16:00.118408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.441 [2024-07-11 07:16:00.118421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.118440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.118453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.119316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.119381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:23:57.442 [2024-07-11 07:16:00.119398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.119487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.119533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.119597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.119698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.119983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.119995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.120024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.120088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.120117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.442 [2024-07-11 07:16:00.120232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.442 [2024-07-11 07:16:00.120347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.442 [2024-07-11 07:16:00.120523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.442 [2024-07-11 07:16:00.120536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.120566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.120597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.120628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.120658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.120689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.120720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.120750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.120796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.120828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.120847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:23:57.443 [2024-07-11 07:16:00.121834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.121847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.121977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.121989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.122065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.122189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.122218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.443 [2024-07-11 07:16:00.122275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.443 [2024-07-11 07:16:00.122454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.443 [2024-07-11 07:16:00.122500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.122513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.130868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.130900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.130921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.130934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.130951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.130975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.130994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.444 [2024-07-11 07:16:00.131211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.444 [2024-07-11 07:16:00.131946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.444 [2024-07-11 07:16:00.131963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.444 [2024-07-11 07:16:00.131975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.131992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.132021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.132050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.132085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:23:57.445 [2024-07-11 07:16:00.132115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.132144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.132174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.132933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.132981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.132995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.445 [2024-07-11 07:16:00.133764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.445 [2024-07-11 07:16:00.133940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.133973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.445 [2024-07-11 07:16:00.133990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.445 [2024-07-11 07:16:00.134002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.134263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.134323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.134394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.134968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.134989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:23:57.446 [2024-07-11 07:16:00.135140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.446 [2024-07-11 07:16:00.135727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.446 [2024-07-11 07:16:00.135744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.446 [2024-07-11 07:16:00.135756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.135785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.135814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.135843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.135872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.135901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.135936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.135966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.135984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.135995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.136941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.136953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:23:57.447 [2024-07-11 07:16:00.136970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.447 [2024-07-11 07:16:00.136983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.137000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.137012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.447 [2024-07-11 07:16:00.137029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.447 [2024-07-11 07:16:00.137041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.137142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.137317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.137993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.138185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.138245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.138365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.138394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.138454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.138567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.448 [2024-07-11 07:16:00.138598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.448 [2024-07-11 07:16:00.138847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.448 [2024-07-11 07:16:00.138864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.448 [2024-07-11 07:16:00.138876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.138893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.138905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.138928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.138941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.138958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.138970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.138987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.138998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.139085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.139411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.139440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:23:57.449 [2024-07-11 07:16:00.139511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.139524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.139570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.139582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.449 [2024-07-11 07:16:00.140575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:57.449 [2024-07-11 07:16:00.140592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.449 [2024-07-11 07:16:00.140604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.140632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.140661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.140696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.140726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-07-11 07:16:00.140755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.140784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.140813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.140831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.450 [2024-07-11 07:16:00.140843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-07-11 07:16:00.148174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-07-11 07:16:00.148240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-07-11 07:16:00.148622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-07-11 07:16:00.148680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.450 [2024-07-11 07:16:00.148738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.450 [2024-07-11 07:16:00.148766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.450 [2024-07-11 07:16:00.148784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.148796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.148813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.148840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.148857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.148869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.148888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.148899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.148917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.148937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.148956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.148969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.148987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.148999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.149060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:23:57.451 [2024-07-11 07:16:00.149077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.149179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.149210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.149284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.149318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.149407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.149567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.149721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.149983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.150236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.451 [2024-07-11 07:16:00.150336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.451 [2024-07-11 07:16:00.150408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.451 [2024-07-11 07:16:00.150456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.451 [2024-07-11 07:16:00.150493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.150506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.150577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.150682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.150968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.150980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.151049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.151117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.151151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.151287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:23:57.452 [2024-07-11 07:16:00.151421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.151689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.151723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:00.151799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:00.151951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:00.151969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:07.117903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:07.117959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:07.118027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.452 [2024-07-11 07:16:07.118061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:07.118081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.452 [2024-07-11 07:16:07.118094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:57.452 [2024-07-11 07:16:07.118112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.118501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.118546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.118582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.118688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.118802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.118847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.118948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.118980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.118998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.119011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.119041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.119072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.119101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.119132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.453 [2024-07-11 07:16:07.119161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.119206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.119238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.453 [2024-07-11 07:16:07.119292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:57.453 [2024-07-11 07:16:07.119312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.119325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.119358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.119388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.119436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.119733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.119778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.119848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.119898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.119962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.119984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.119997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:57.454 [2024-07-11 07:16:07.120099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.120411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.120464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.120518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.120557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.120623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.120671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.120708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.120963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.120989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.121003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.121035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.121068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.121102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.121142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.121176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.454 [2024-07-11 07:16:07.121209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.121242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.454 [2024-07-11 07:16:07.121275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.454 [2024-07-11 07:16:07.121308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:57.454 [2024-07-11 07:16:07.121329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.121440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.121556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.121689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.121728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.121970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.121992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.122620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:23:57.455 [2024-07-11 07:16:07.122643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.122657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.122759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.122880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.122961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.122982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.122995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.123020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.123033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.123055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.123067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.123088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.455 [2024-07-11 07:16:07.123100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.123367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.455 [2024-07-11 07:16:07.123391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:57.455 [2024-07-11 07:16:07.123422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:07.123436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.456 [2024-07-11 07:16:07.123523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:07.123584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:07.123629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:07.123673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:07.123718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:07.123775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.456 [2024-07-11 07:16:07.123846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.456 [2024-07-11 07:16:07.123915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:07.123956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:07.123970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.378983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.378996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.379008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.379021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.379033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.379047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.379064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.379079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.379091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.379104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.379115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.379128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.379141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.379154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.456 [2024-07-11 07:16:20.379166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.456 [2024-07-11 07:16:20.379179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.457 [2024-07-11 07:16:20.379289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379585] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.379600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.379628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.379683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.379854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379891] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.379932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.379956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.379980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.379993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.380004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.380028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.380057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.380081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.380105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.380134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25144 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.457 [2024-07-11 07:16:20.380159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.380183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.457 [2024-07-11 07:16:20.380207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.457 [2024-07-11 07:16:20.380219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.380254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.380278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.380331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.380378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:57.458 [2024-07-11 07:16:20.380402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.380530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.380557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.380638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380718] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.380980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.380993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.458 [2024-07-11 07:16:20.381360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.458 [2024-07-11 07:16:20.381384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.458 [2024-07-11 07:16:20.381397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.459 [2024-07-11 07:16:20.381874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2abd0 is same with the state(5) to be set 00:23:57.459 [2024-07-11 07:16:20.381901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.459 [2024-07-11 07:16:20.381910] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.459 [2024-07-11 07:16:20.381919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24968 len:8 PRP1 0x0 PRP2 0x0 00:23:57.459 [2024-07-11 07:16:20.381930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.381990] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b2abd0 was disconnected and freed. reset controller. 00:23:57.459 [2024-07-11 07:16:20.382094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.459 [2024-07-11 07:16:20.382116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.382130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.459 [2024-07-11 07:16:20.382141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.382153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.459 [2024-07-11 07:16:20.382165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.382177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.459 [2024-07-11 07:16:20.382188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.459 [2024-07-11 07:16:20.382199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc56e0 is same with the state(5) to be set 00:23:57.459 [2024-07-11 07:16:20.383420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.459 [2024-07-11 07:16:20.383467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc56e0 (9): Bad file descriptor 00:23:57.459 [2024-07-11 07:16:20.383615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.459 [2024-07-11 07:16:20.383673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.459 [2024-07-11 07:16:20.383695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc56e0 with addr=10.0.0.2, port=4421 00:23:57.459 [2024-07-11 07:16:20.383710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc56e0 is same with the state(5) to be set 00:23:57.459 [2024-07-11 07:16:20.383734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc56e0 (9): Bad file descriptor 00:23:57.459 [2024-07-11 07:16:20.383756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.459 [2024-07-11 07:16:20.383770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.459 [2024-07-11 07:16:20.383784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.459 [2024-07-11 07:16:20.383808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.459 [2024-07-11 07:16:20.383837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.459 [2024-07-11 07:16:30.433648] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:57.459 Received shutdown signal, test time was about 54.861026 seconds 00:23:57.459 00:23:57.459 Latency(us) 00:23:57.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.459 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:57.459 Verification LBA range: start 0x0 length 0x4000 00:23:57.459 Nvme0n1 : 54.86 12571.60 49.11 0.00 0.00 10166.00 893.67 7015926.69 00:23:57.459 =================================================================================================================== 00:23:57.459 Total : 12571.60 49.11 0.00 0.00 10166.00 893.67 7015926.69 00:23:57.459 07:16:40 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.459 07:16:41 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:57.459 07:16:41 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:57.459 07:16:41 -- host/multipath.sh@125 -- # nvmftestfini 00:23:57.459 07:16:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:57.459 07:16:41 -- nvmf/common.sh@116 -- # sync 00:23:57.459 07:16:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:57.459 07:16:41 -- nvmf/common.sh@119 -- # set +e 00:23:57.459 07:16:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:57.459 07:16:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:57.459 rmmod nvme_tcp 00:23:57.459 rmmod nvme_fabrics 00:23:57.459 rmmod nvme_keyring 00:23:57.459 07:16:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:57.459 07:16:41 -- nvmf/common.sh@123 -- # set -e 00:23:57.459 07:16:41 -- nvmf/common.sh@124 -- # return 0 00:23:57.459 07:16:41 -- nvmf/common.sh@477 -- # '[' -n 87484 ']' 00:23:57.459 07:16:41 -- nvmf/common.sh@478 -- # killprocess 87484 00:23:57.459 07:16:41 -- common/autotest_common.sh@926 -- # '[' -z 87484 ']' 00:23:57.459 07:16:41 -- common/autotest_common.sh@930 -- # kill -0 87484 00:23:57.459 07:16:41 -- common/autotest_common.sh@931 -- # uname 00:23:57.459 07:16:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:57.459 07:16:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87484 00:23:57.459 killing process with pid 87484 00:23:57.459 07:16:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:57.459 07:16:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:57.459 07:16:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87484' 00:23:57.459 07:16:41 -- common/autotest_common.sh@945 -- # kill 87484 00:23:57.459 07:16:41 -- common/autotest_common.sh@950 -- # wait 87484 00:23:57.718 07:16:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:57.718 07:16:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:57.718 07:16:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:57.719 07:16:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.719 07:16:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:57.719 07:16:41 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.719 07:16:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.719 07:16:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.719 07:16:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:57.719 00:23:57.719 real 1m0.691s 00:23:57.719 user 2m47.824s 00:23:57.719 sys 0m15.789s 00:23:57.719 07:16:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.719 07:16:41 -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 ************************************ 00:23:57.719 END TEST nvmf_multipath 00:23:57.719 ************************************ 00:23:57.719 07:16:41 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:57.719 07:16:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:57.719 07:16:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:57.719 07:16:41 -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 ************************************ 00:23:57.719 START TEST nvmf_timeout 00:23:57.719 ************************************ 00:23:57.719 07:16:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:57.719 * Looking for test storage... 00:23:57.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:57.719 07:16:41 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:57.719 07:16:41 -- nvmf/common.sh@7 -- # uname -s 00:23:57.719 07:16:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.719 07:16:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.719 07:16:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.719 07:16:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.719 07:16:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.719 07:16:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.719 07:16:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.719 07:16:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.719 07:16:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.719 07:16:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.719 07:16:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:23:57.719 07:16:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:23:57.719 07:16:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.719 07:16:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.719 07:16:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:57.719 07:16:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.719 07:16:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.719 07:16:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.719 07:16:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.719 07:16:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.719 07:16:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.719 07:16:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.719 07:16:41 -- paths/export.sh@5 -- # export PATH 00:23:57.719 07:16:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.719 07:16:41 -- nvmf/common.sh@46 -- # : 0 00:23:57.719 07:16:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:57.719 07:16:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:57.719 07:16:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:57.719 07:16:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.719 07:16:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.719 07:16:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:57.719 07:16:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:57.719 07:16:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:57.719 07:16:41 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:57.719 07:16:41 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:57.719 07:16:41 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:57.719 07:16:41 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:57.719 07:16:41 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:57.719 07:16:41 -- host/timeout.sh@19 -- # nvmftestinit 00:23:57.719 07:16:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:57.719 07:16:41 -- nvmf/common.sh@434 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:23:57.719 07:16:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:57.719 07:16:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:57.719 07:16:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:57.719 07:16:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.719 07:16:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.719 07:16:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.719 07:16:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:57.719 07:16:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:57.719 07:16:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:57.719 07:16:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:57.719 07:16:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:57.719 07:16:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:57.719 07:16:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.719 07:16:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.719 07:16:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:57.719 07:16:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:57.719 07:16:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:57.719 07:16:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:57.719 07:16:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:57.719 07:16:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.719 07:16:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:57.719 07:16:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:57.719 07:16:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:57.719 07:16:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:57.719 07:16:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:57.719 07:16:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:57.719 Cannot find device "nvmf_tgt_br" 00:23:57.719 07:16:41 -- nvmf/common.sh@154 -- # true 00:23:57.719 07:16:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:57.719 Cannot find device "nvmf_tgt_br2" 00:23:57.719 07:16:41 -- nvmf/common.sh@155 -- # true 00:23:57.719 07:16:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:57.719 07:16:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:57.719 Cannot find device "nvmf_tgt_br" 00:23:57.719 07:16:41 -- nvmf/common.sh@157 -- # true 00:23:57.719 07:16:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:57.719 Cannot find device "nvmf_tgt_br2" 00:23:57.719 07:16:41 -- nvmf/common.sh@158 -- # true 00:23:57.719 07:16:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:57.978 07:16:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:57.978 07:16:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:57.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.978 07:16:41 -- nvmf/common.sh@161 -- # true 00:23:57.978 07:16:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:57.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.978 07:16:41 -- nvmf/common.sh@162 -- # true 00:23:57.978 07:16:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:57.978 07:16:41 -- nvmf/common.sh@168 
-- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:57.978 07:16:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:57.978 07:16:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:57.978 07:16:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:57.978 07:16:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:57.978 07:16:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:57.978 07:16:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:57.978 07:16:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:57.978 07:16:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:57.978 07:16:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:57.978 07:16:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:57.978 07:16:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:57.978 07:16:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:57.978 07:16:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:57.978 07:16:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:57.978 07:16:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:57.978 07:16:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:57.978 07:16:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:57.978 07:16:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:57.978 07:16:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:57.978 07:16:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:57.978 07:16:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:57.978 07:16:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:57.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:23:57.978 00:23:57.978 --- 10.0.0.2 ping statistics --- 00:23:57.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.978 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:57.978 07:16:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:57.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:57.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:23:57.978 00:23:57.978 --- 10.0.0.3 ping statistics --- 00:23:57.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.978 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:57.978 07:16:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:57.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:23:57.978 00:23:57.978 --- 10.0.0.1 ping statistics --- 00:23:57.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.978 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:57.978 07:16:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.978 07:16:42 -- nvmf/common.sh@421 -- # return 0 00:23:57.978 07:16:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:57.978 07:16:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.978 07:16:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:57.978 07:16:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:57.978 07:16:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.978 07:16:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:57.978 07:16:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:57.978 07:16:42 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:57.978 07:16:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.978 07:16:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:57.978 07:16:42 -- common/autotest_common.sh@10 -- # set +x 00:23:58.237 07:16:42 -- nvmf/common.sh@469 -- # nvmfpid=88852 00:23:58.237 07:16:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:58.237 07:16:42 -- nvmf/common.sh@470 -- # waitforlisten 88852 00:23:58.237 07:16:42 -- common/autotest_common.sh@819 -- # '[' -z 88852 ']' 00:23:58.237 07:16:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.237 07:16:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:58.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.237 07:16:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.237 07:16:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:58.237 07:16:42 -- common/autotest_common.sh@10 -- # set +x 00:23:58.237 [2024-07-11 07:16:42.082604] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:58.237 [2024-07-11 07:16:42.082665] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.237 [2024-07-11 07:16:42.215113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:58.496 [2024-07-11 07:16:42.302736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:58.496 [2024-07-11 07:16:42.302884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.496 [2024-07-11 07:16:42.302896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.496 [2024-07-11 07:16:42.302904] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
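The interface plumbing traced above (nvmf_veth_init) can be reproduced outside the harness with the same iproute2 and iptables calls: the target lives in its own network namespace, the initiator stays in the root namespace, and the host-side veth ends are bridged together. The following is a condensed sketch assembled only from the commands visible in the trace, with the harness's retry and cleanup logic omitted:

  # Target runs inside its own network namespace; initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the three host-side veth ends together.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Accept NVMe/TCP traffic on port 4420 and let bridged traffic pass.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity pings in both directions, then launch the target inside the namespace.
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &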
00:23:58.496 [2024-07-11 07:16:42.303070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.496 [2024-07-11 07:16:42.303083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.063 07:16:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:59.063 07:16:43 -- common/autotest_common.sh@852 -- # return 0 00:23:59.063 07:16:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:59.063 07:16:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:59.063 07:16:43 -- common/autotest_common.sh@10 -- # set +x 00:23:59.063 07:16:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.063 07:16:43 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.063 07:16:43 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:59.321 [2024-07-11 07:16:43.294923] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.321 07:16:43 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:59.580 Malloc0 00:23:59.580 07:16:43 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.837 07:16:43 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.094 07:16:43 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.352 [2024-07-11 07:16:44.161342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.352 07:16:44 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:00.352 07:16:44 -- host/timeout.sh@32 -- # bdevperf_pid=88939 00:24:00.352 07:16:44 -- host/timeout.sh@34 -- # waitforlisten 88939 /var/tmp/bdevperf.sock 00:24:00.352 07:16:44 -- common/autotest_common.sh@819 -- # '[' -z 88939 ']' 00:24:00.352 07:16:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.352 07:16:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:00.352 07:16:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.352 07:16:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:00.352 07:16:44 -- common/autotest_common.sh@10 -- # set +x 00:24:00.352 [2024-07-11 07:16:44.216642] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
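Stripped of the xtrace prefixes, the timeout.sh prologue above boils down to a short rpc.py sequence that exposes a malloc-backed namespace over NVMe/TCP and then launches bdevperf as a separate, RPC-driven initiator. A condensed sketch using the same arguments as the trace (the controller attach with its --ctrlr-loss-timeout-sec and --reconnect-delay-sec options follows in the trace below):

  # Target side: TCP transport, 64 MiB malloc bdev with 512-byte blocks, one subsystem
  # listening on 10.0.0.2:4420 (arguments copied from the traced rpc.py calls).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf on its own RPC socket (-z keeps it idle until driven over RPC),
  # queue depth 128, 4096-byte verify workload, 10-second runtime.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &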
00:24:00.352 [2024-07-11 07:16:44.216719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88939 ] 00:24:00.352 [2024-07-11 07:16:44.352659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.610 [2024-07-11 07:16:44.462367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.177 07:16:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:01.177 07:16:45 -- common/autotest_common.sh@852 -- # return 0 00:24:01.178 07:16:45 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:01.436 07:16:45 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:01.695 NVMe0n1 00:24:01.695 07:16:45 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.695 07:16:45 -- host/timeout.sh@51 -- # rpc_pid=88981 00:24:01.695 07:16:45 -- host/timeout.sh@53 -- # sleep 1 00:24:01.695 Running I/O for 10 seconds... 00:24:02.631 07:16:46 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.893 [2024-07-11 07:16:46.803545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 
[2024-07-11 07:16:46.803685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.803965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1866e20 is same with the state(5) to be set 00:24:02.893 [2024-07-11 07:16:46.804344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.804557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.804575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.804593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.804631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.804650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.804669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804747] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.804983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.804994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.805002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.805021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.805040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.805059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.805078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.805097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.805116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.894 [2024-07-11 07:16:46.805134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.805153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.805174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.805195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.805213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.894 [2024-07-11 07:16:46.805233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.894 [2024-07-11 07:16:46.805244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 
[2024-07-11 07:16:46.805336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805544] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.805952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.805989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.805999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.806008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.806018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.895 [2024-07-11 07:16:46.806027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.806037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.806056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.806066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.806074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.895 [2024-07-11 07:16:46.806084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.895 [2024-07-11 07:16:46.806093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806148] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 
[2024-07-11 07:16:46.806898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:02.896 [2024-07-11 07:16:46.806916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.806989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.806997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.807007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.807016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.807025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.807034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.807044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.807052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.807061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.896 [2024-07-11 07:16:46.807070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.896 [2024-07-11 07:16:46.807079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121a420 is same with the state(5) to be set 00:24:02.897 [2024-07-11 07:16:46.807096] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:02.897 [2024-07-11 07:16:46.807105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:02.897 [2024-07-11 07:16:46.807113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125792 len:8 PRP1 0x0 PRP2 0x0 00:24:02.897 [2024-07-11 07:16:46.807121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.897 [2024-07-11 07:16:46.807171] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x121a420 was disconnected and freed. reset controller. 00:24:02.897 [2024-07-11 07:16:46.807370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.897 [2024-07-11 07:16:46.807470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3170 (9): Bad file descriptor 00:24:02.897 [2024-07-11 07:16:46.807556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.897 [2024-07-11 07:16:46.807616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.897 [2024-07-11 07:16:46.807636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d3170 with addr=10.0.0.2, port=4420 00:24:02.897 [2024-07-11 07:16:46.807648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d3170 is same with the state(5) to be set 00:24:02.897 [2024-07-11 07:16:46.807666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3170 (9): Bad file descriptor 00:24:02.897 [2024-07-11 07:16:46.807683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.897 [2024-07-11 07:16:46.807694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.897 [2024-07-11 07:16:46.807703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.897 [2024-07-11 07:16:46.807730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.897 [2024-07-11 07:16:46.807741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.897 07:16:46 -- host/timeout.sh@56 -- # sleep 2 00:24:04.799 [2024-07-11 07:16:48.807806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.799 [2024-07-11 07:16:48.807879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.799 [2024-07-11 07:16:48.807899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d3170 with addr=10.0.0.2, port=4420 00:24:04.799 [2024-07-11 07:16:48.807910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d3170 is same with the state(5) to be set 00:24:04.799 [2024-07-11 07:16:48.807940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3170 (9): Bad file descriptor 00:24:04.799 [2024-07-11 07:16:48.807961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:04.799 [2024-07-11 07:16:48.807971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:04.799 [2024-07-11 07:16:48.807979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:04.799 [2024-07-11 07:16:48.807998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:04.799 [2024-07-11 07:16:48.808010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:04.799 07:16:48 -- host/timeout.sh@57 -- # get_controller 00:24:04.799 07:16:48 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.799 07:16:48 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:05.056 07:16:49 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:05.056 07:16:49 -- host/timeout.sh@58 -- # get_bdev 00:24:05.056 07:16:49 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:05.056 07:16:49 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:05.314 07:16:49 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:05.314 07:16:49 -- host/timeout.sh@61 -- # sleep 5 00:24:07.217 [2024-07-11 07:16:50.808136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.217 [2024-07-11 07:16:50.808248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.217 [2024-07-11 07:16:50.808269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d3170 with addr=10.0.0.2, port=4420 00:24:07.217 [2024-07-11 07:16:50.808282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d3170 is same with the state(5) to be set 00:24:07.217 [2024-07-11 07:16:50.808307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d3170 (9): Bad file descriptor 00:24:07.217 [2024-07-11 07:16:50.808339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.217 [2024-07-11 07:16:50.808352] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.217 [2024-07-11 07:16:50.808363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:24:07.217 [2024-07-11 07:16:50.808390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:07.217 [2024-07-11 07:16:50.808403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.117 [2024-07-11 07:16:52.808424] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.117 [2024-07-11 07:16:52.808466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.117 [2024-07-11 07:16:52.808485] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.117 [2024-07-11 07:16:52.808494] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:09.117 [2024-07-11 07:16:52.808518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.054 00:24:10.054 Latency(us) 00:24:10.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.054 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.054 Verification LBA range: start 0x0 length 0x4000 00:24:10.054 NVMe0n1 : 8.11 1932.01 7.55 15.78 0.00 65633.34 2666.12 7015926.69 00:24:10.054 =================================================================================================================== 00:24:10.054 Total : 1932.01 7.55 15.78 0.00 65633.34 2666.12 7015926.69 00:24:10.054 0 00:24:10.313 07:16:54 -- host/timeout.sh@62 -- # get_controller 00:24:10.313 07:16:54 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:10.313 07:16:54 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:10.571 07:16:54 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:10.571 07:16:54 -- host/timeout.sh@63 -- # get_bdev 00:24:10.571 07:16:54 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:10.571 07:16:54 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:10.830 07:16:54 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:10.830 07:16:54 -- host/timeout.sh@65 -- # wait 88981 00:24:10.830 07:16:54 -- host/timeout.sh@67 -- # killprocess 88939 00:24:10.830 07:16:54 -- common/autotest_common.sh@926 -- # '[' -z 88939 ']' 00:24:10.830 07:16:54 -- common/autotest_common.sh@930 -- # kill -0 88939 00:24:10.830 07:16:54 -- common/autotest_common.sh@931 -- # uname 00:24:10.830 07:16:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:10.830 07:16:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88939 00:24:10.830 killing process with pid 88939 00:24:10.830 Received shutdown signal, test time was about 9.135655 seconds 00:24:10.830 00:24:10.830 Latency(us) 00:24:10.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.830 =================================================================================================================== 00:24:10.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.830 07:16:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:10.830 07:16:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:10.830 07:16:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88939' 00:24:10.830 07:16:54 -- common/autotest_common.sh@945 -- # kill 88939 00:24:10.830 07:16:54 -- 
common/autotest_common.sh@950 -- # wait 88939 00:24:11.089 07:16:55 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.348 [2024-07-11 07:16:55.308650] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.348 07:16:55 -- host/timeout.sh@74 -- # bdevperf_pid=89139 00:24:11.348 07:16:55 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:11.348 07:16:55 -- host/timeout.sh@76 -- # waitforlisten 89139 /var/tmp/bdevperf.sock 00:24:11.348 07:16:55 -- common/autotest_common.sh@819 -- # '[' -z 89139 ']' 00:24:11.348 07:16:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.348 07:16:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:11.348 07:16:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.348 07:16:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:11.348 07:16:55 -- common/autotest_common.sh@10 -- # set +x 00:24:11.348 [2024-07-11 07:16:55.363682] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:11.348 [2024-07-11 07:16:55.363777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89139 ] 00:24:11.608 [2024-07-11 07:16:55.494659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.608 [2024-07-11 07:16:55.570322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.545 07:16:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:12.545 07:16:56 -- common/autotest_common.sh@852 -- # return 0 00:24:12.545 07:16:56 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:12.545 07:16:56 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:12.812 NVMe0n1 00:24:12.812 07:16:56 -- host/timeout.sh@84 -- # rpc_pid=89181 00:24:12.812 07:16:56 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.812 07:16:56 -- host/timeout.sh@86 -- # sleep 1 00:24:13.127 Running I/O for 10 seconds... 
00:24:14.074 07:16:57 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.074 [2024-07-11 07:16:57.934258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934675] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.934688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a57690 is same with the state(5) to be set 00:24:14.074 [2024-07-11 07:16:57.935011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.074 [2024-07-11 07:16:57.935055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.074 [2024-07-11 07:16:57.935077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.074 [2024-07-11 07:16:57.935087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.074 [2024-07-11 07:16:57.935099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.074 [2024-07-11 07:16:57.935108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.074 [2024-07-11 07:16:57.935119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.074 [2024-07-11 07:16:57.935128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.074 [2024-07-11 07:16:57.935138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.074 [2024-07-11 07:16:57.935147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.074 [2024-07-11 07:16:57.935158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.935639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.935659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.935679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:14.075 [2024-07-11 07:16:57.935689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.935698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 
07:16:57.935907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.935941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.935959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.935978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.935988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.935997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.936007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.936016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.936026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.075 [2024-07-11 07:16:57.936034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.936045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.936053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.075 [2024-07-11 07:16:57.936065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.075 [2024-07-11 07:16:57.936081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936304] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129344 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 [2024-07-11 07:16:57.936943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.076 
[2024-07-11 07:16:57.936963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.076 [2024-07-11 07:16:57.936974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.076 [2024-07-11 07:16:57.936983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.936993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.077 [2024-07-11 07:16:57.937565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.077 [2024-07-11 07:16:57.937725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2e420 is same with the state(5) to be set 00:24:14.077 [2024-07-11 07:16:57.937754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.077 [2024-07-11 07:16:57.937763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.077 [2024-07-11 07:16:57.937771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129096 len:8 PRP1 0x0 PRP2 0x0 00:24:14.077 [2024-07-11 07:16:57.937780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.077 [2024-07-11 07:16:57.937855] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e2e420 was disconnected and freed. reset controller. 
00:24:14.077 [2024-07-11 07:16:57.938072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:14.077 [2024-07-11 07:16:57.938151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:14.077 [2024-07-11 07:16:57.938266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.077 [2024-07-11 07:16:57.938337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.077 [2024-07-11 07:16:57.938357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de7170 with addr=10.0.0.2, port=4420 00:24:14.077 [2024-07-11 07:16:57.938367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de7170 is same with the state(5) to be set 00:24:14.077 [2024-07-11 07:16:57.938386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:14.077 [2024-07-11 07:16:57.938404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:14.077 [2024-07-11 07:16:57.938414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:14.078 [2024-07-11 07:16:57.938425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:14.078 [2024-07-11 07:16:57.938472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.078 [2024-07-11 07:16:57.938495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:14.078 07:16:57 -- host/timeout.sh@90 -- # sleep 1 00:24:15.013 [2024-07-11 07:16:58.938587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.013 [2024-07-11 07:16:58.938684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:15.013 [2024-07-11 07:16:58.938706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de7170 with addr=10.0.0.2, port=4420 00:24:15.013 [2024-07-11 07:16:58.938719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de7170 is same with the state(5) to be set 00:24:15.013 [2024-07-11 07:16:58.938739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:15.013 [2024-07-11 07:16:58.938758] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:15.013 [2024-07-11 07:16:58.938769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:15.013 [2024-07-11 07:16:58.938779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.013 [2024-07-11 07:16:58.938797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:15.013 [2024-07-11 07:16:58.938810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:15.013 07:16:58 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.271 [2024-07-11 07:16:59.185119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.271 07:16:59 -- host/timeout.sh@92 -- # wait 89181 00:24:16.205 [2024-07-11 07:16:59.956175] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:24.331 00:24:24.331 Latency(us) 00:24:24.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.331 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:24.331 Verification LBA range: start 0x0 length 0x4000 00:24:24.331 NVMe0n1 : 10.00 10883.29 42.51 0.00 0.00 11744.58 1079.85 3019898.88 00:24:24.331 =================================================================================================================== 00:24:24.331 Total : 10883.29 42.51 0.00 0.00 11744.58 1079.85 3019898.88 00:24:24.331 0 00:24:24.331 07:17:06 -- host/timeout.sh@97 -- # rpc_pid=89304 00:24:24.331 07:17:06 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:24.331 07:17:06 -- host/timeout.sh@98 -- # sleep 1 00:24:24.331 Running I/O for 10 seconds... 00:24:24.331 07:17:07 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.331 [2024-07-11 07:17:08.137937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 
07:17:08.138126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same 
with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.331 [2024-07-11 07:17:08.138433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138648] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the 
state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.138999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.139015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.139025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67150 is same with the state(5) to be set 00:24:24.332 [2024-07-11 07:17:08.139342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 
07:17:08.139702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.332 [2024-07-11 07:17:08.139794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.332 [2024-07-11 07:17:08.139804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.139984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.139993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6040 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 
07:17:08.140565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.333 [2024-07-11 07:17:08.140585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.333 [2024-07-11 07:17:08.140625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.333 [2024-07-11 07:17:08.140646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.333 [2024-07-11 07:17:08.140668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.333 [2024-07-11 07:17:08.140679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.333 [2024-07-11 07:17:08.140688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.140726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.140750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.140982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.140991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:24.334 [2024-07-11 07:17:08.141190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141387] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.334 [2024-07-11 07:17:08.141532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.334 [2024-07-11 07:17:08.141551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.334 [2024-07-11 07:17:08.141562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.335 [2024-07-11 07:17:08.141571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.335 [2024-07-11 07:17:08.141591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.335 [2024-07-11 07:17:08.141610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.335 [2024-07-11 07:17:08.141630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.335 [2024-07-11 07:17:08.141667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.335 [2024-07-11 07:17:08.141685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.335 [2024-07-11 07:17:08.141768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6968 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 07:17:08.141983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.141993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.335 [2024-07-11 
07:17:08.142001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.142014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e360 is same with the state(5) to be set 00:24:24.335 [2024-07-11 07:17:08.142026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:24.335 [2024-07-11 07:17:08.142034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:24.335 [2024-07-11 07:17:08.142043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6608 len:8 PRP1 0x0 PRP2 0x0 00:24:24.335 [2024-07-11 07:17:08.142052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.335 [2024-07-11 07:17:08.142084] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e4e360 was disconnected and freed. reset controller. 00:24:24.335 [2024-07-11 07:17:08.142289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.335 [2024-07-11 07:17:08.142383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:24.335 [2024-07-11 07:17:08.142523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.335 [2024-07-11 07:17:08.142574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.335 [2024-07-11 07:17:08.142593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de7170 with addr=10.0.0.2, port=4420 00:24:24.335 [2024-07-11 07:17:08.142604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de7170 is same with the state(5) to be set 00:24:24.335 [2024-07-11 07:17:08.142624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:24.335 [2024-07-11 07:17:08.142641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.335 [2024-07-11 07:17:08.142652] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.335 [2024-07-11 07:17:08.142663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.335 [2024-07-11 07:17:08.142682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.335 [2024-07-11 07:17:08.142695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.335 07:17:08 -- host/timeout.sh@101 -- # sleep 3 00:24:25.271 [2024-07-11 07:17:09.142769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.271 [2024-07-11 07:17:09.142850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.271 [2024-07-11 07:17:09.142871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de7170 with addr=10.0.0.2, port=4420 00:24:25.271 [2024-07-11 07:17:09.142883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de7170 is same with the state(5) to be set 00:24:25.271 [2024-07-11 07:17:09.142903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:25.271 [2024-07-11 07:17:09.142921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:25.271 [2024-07-11 07:17:09.142931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:25.271 [2024-07-11 07:17:09.142941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.271 [2024-07-11 07:17:09.142961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:25.271 [2024-07-11 07:17:09.142973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.206 [2024-07-11 07:17:10.143043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.207 [2024-07-11 07:17:10.143114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.207 [2024-07-11 07:17:10.143135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de7170 with addr=10.0.0.2, port=4420 00:24:26.207 [2024-07-11 07:17:10.143147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de7170 is same with the state(5) to be set 00:24:26.207 [2024-07-11 07:17:10.143166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:26.207 [2024-07-11 07:17:10.143184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.207 [2024-07-11 07:17:10.143194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.207 [2024-07-11 07:17:10.143204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.207 [2024-07-11 07:17:10.143224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.207 [2024-07-11 07:17:10.143236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.142 [2024-07-11 07:17:11.144866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.142 [2024-07-11 07:17:11.144947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.142 [2024-07-11 07:17:11.144968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de7170 with addr=10.0.0.2, port=4420 00:24:27.142 [2024-07-11 07:17:11.144986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de7170 is same with the state(5) to be set 00:24:27.142 [2024-07-11 07:17:11.145141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7170 (9): Bad file descriptor 00:24:27.142 [2024-07-11 07:17:11.145283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.142 [2024-07-11 07:17:11.145311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.142 [2024-07-11 07:17:11.145322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.142 [2024-07-11 07:17:11.147248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.142 [2024-07-11 07:17:11.147290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.142 07:17:11 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.401 [2024-07-11 07:17:11.391555] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.401 07:17:11 -- host/timeout.sh@103 -- # wait 89304 00:24:28.337 [2024-07-11 07:17:12.167881] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
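The repeated failed connect() attempts above (each ending in errno = 111, i.e. ECONNREFUSED) appear to come from the timeout test dropping the target's TCP listener while bdevperf keeps issuing I/O; once host/timeout.sh re-adds the listener with the rpc.py nvmf_subsystem_add_listener call above, the next reconnect attempt succeeds and the controller reset completes. A minimal sketch of that listener-toggle sequence, using only commands already visible in this run (target-side rpc.py calls are assumed to use the default rpc socket, as they do here):

  # drop the TCP listener so the initiator's reconnects fail with ECONNREFUSED (111)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # let bdev_nvme run through a few reconnect attempts (the test sleeps 3 s at this point)
  sleep 3
  # restore the listener; the pending controller reset then finishes successfully
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420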
00:24:33.605 
00:24:33.605 Latency(us)
00:24:33.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.605 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:33.605 Verification LBA range: start 0x0 length 0x4000
00:24:33.605 NVMe0n1 : 10.01 9117.04 35.61 7331.73 0.00 7767.46 733.56 3019898.88
00:24:33.605 ===================================================================================================================
00:24:33.605 Total : 9117.04 35.61 7331.73 0.00 7767.46 0.00 3019898.88
00:24:33.605 0
00:24:33.605 07:17:17 -- host/timeout.sh@105 -- # killprocess 89139
00:24:33.605 07:17:17 -- common/autotest_common.sh@926 -- # '[' -z 89139 ']'
00:24:33.605 07:17:17 -- common/autotest_common.sh@930 -- # kill -0 89139
00:24:33.605 07:17:17 -- common/autotest_common.sh@931 -- # uname
00:24:33.605 07:17:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:33.605 07:17:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89139
00:24:33.605 killing process with pid 89139 Received shutdown signal, test time was about 10.000000 seconds
00:24:33.605 
00:24:33.605 Latency(us)
00:24:33.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.605 ===================================================================================================================
00:24:33.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:33.605 07:17:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:33.605 07:17:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:33.605 07:17:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89139'
00:24:33.605 07:17:17 -- common/autotest_common.sh@945 -- # kill 89139
00:24:33.605 07:17:17 -- common/autotest_common.sh@950 -- # wait 89139
00:24:33.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:33.605 07:17:17 -- host/timeout.sh@110 -- # bdevperf_pid=89425
00:24:33.605 07:17:17 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:33.605 07:17:17 -- host/timeout.sh@112 -- # waitforlisten 89425 /var/tmp/bdevperf.sock
00:24:33.605 07:17:17 -- common/autotest_common.sh@819 -- # '[' -z 89425 ']'
00:24:33.605 07:17:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:33.605 07:17:17 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:33.605 07:17:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:33.605 07:17:17 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:33.605 07:17:17 -- common/autotest_common.sh@10 -- # set +x
00:24:33.605 [2024-07-11 07:17:17.427370] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
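A quick arithmetic cross-check on the NVMe0n1 result row above: at the 4096-byte I/O size used by this job, 9117.04 IOPS works out to 9117.04 * 4096 / 1048576 ≈ 35.6 MiB/s, matching the reported 35.61 MiB/s column; the 7331.73 in the Fail/s column is failed I/O per second, consistent with the stream of ABORTED - SQ DELETION completions logged while the listener was removed, and the Average/min/max figures are latencies in microseconds per the Latency(us) header.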
00:24:33.605 [2024-07-11 07:17:17.427491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89425 ] 00:24:33.605 [2024-07-11 07:17:17.566685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.605 [2024-07-11 07:17:17.653323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.541 07:17:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:34.541 07:17:18 -- common/autotest_common.sh@852 -- # return 0 00:24:34.541 07:17:18 -- host/timeout.sh@116 -- # dtrace_pid=89453 00:24:34.541 07:17:18 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89425 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:34.541 07:17:18 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:34.541 07:17:18 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:34.801 NVMe0n1 00:24:34.801 07:17:18 -- host/timeout.sh@124 -- # rpc_pid=89511 00:24:34.801 07:17:18 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.801 07:17:18 -- host/timeout.sh@125 -- # sleep 1 00:24:35.059 Running I/O for 10 seconds... 00:24:35.996 07:17:19 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.996 [2024-07-11 07:17:20.000652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000979] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.000987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the 
state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b08770 is same with the state(5) to be set 00:24:35.996 [2024-07-11 07:17:20.001603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.996 [2024-07-11 07:17:20.001664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.996 [2024-07-11 07:17:20.001687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:55 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.001978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.001991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103104 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:35.997 [2024-07-11 07:17:20.002331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 
07:17:20.002587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.997 [2024-07-11 07:17:20.002753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.997 [2024-07-11 07:17:20.002765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.002982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.002994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 
[2024-07-11 07:17:20.003621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.998 [2024-07-11 07:17:20.003714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.998 [2024-07-11 07:17:20.003725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.003986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.003999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004121] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.999 [2024-07-11 07:17:20.004742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.999 [2024-07-11 07:17:20.004751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.000 [2024-07-11 07:17:20.004762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.000 [2024-07-11 07:17:20.004771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.000 [2024-07-11 07:17:20.004782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.000 [2024-07-11 07:17:20.004792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.000 [2024-07-11 07:17:20.004803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fa420 is same with the state(5) to be set 00:24:36.000 [2024-07-11 07:17:20.004815] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:36.000 [2024-07-11 07:17:20.004823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.000 [2024-07-11 07:17:20.004837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109112 len:8 PRP1 0x0 PRP2 0x0 00:24:36.000 [2024-07-11 07:17:20.004846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.000 [2024-07-11 07:17:20.004899] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8fa420 was disconnected and freed. reset controller. 00:24:36.000 [2024-07-11 07:17:20.005155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.000 [2024-07-11 07:17:20.005241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3170 (9): Bad file descriptor 00:24:36.000 [2024-07-11 07:17:20.005342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.000 [2024-07-11 07:17:20.005395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.000 [2024-07-11 07:17:20.005414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3170 with addr=10.0.0.2, port=4420 00:24:36.000 [2024-07-11 07:17:20.005425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3170 is same with the state(5) to be set 00:24:36.000 [2024-07-11 07:17:20.005469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3170 (9): Bad file descriptor 00:24:36.000 [2024-07-11 07:17:20.005490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.000 [2024-07-11 07:17:20.005501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.000 [2024-07-11 07:17:20.005511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.000 [2024-07-11 07:17:20.005532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
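The "connect() failed, errno = 111" entries in this stretch are ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 while the reconnect loop runs. The errno name can be confirmed with a one-liner (a side check on a Linux box, not part of the test scripts themselves):

  # prints "ECONNREFUSED Connection refused" on Linux
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'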
00:24:36.000 [2024-07-11 07:17:20.005544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.000 07:17:20 -- host/timeout.sh@128 -- # wait 89511 00:24:38.526 [2024-07-11 07:17:22.005657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.526 [2024-07-11 07:17:22.005764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.527 [2024-07-11 07:17:22.005784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3170 with addr=10.0.0.2, port=4420 00:24:38.527 [2024-07-11 07:17:22.005797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3170 is same with the state(5) to be set 00:24:38.527 [2024-07-11 07:17:22.005819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3170 (9): Bad file descriptor 00:24:38.527 [2024-07-11 07:17:22.005839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.527 [2024-07-11 07:17:22.005850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.527 [2024-07-11 07:17:22.005859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.527 [2024-07-11 07:17:22.005882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.527 [2024-07-11 07:17:22.005894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.424 [2024-07-11 07:17:24.005978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.424 [2024-07-11 07:17:24.006059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.424 [2024-07-11 07:17:24.006080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3170 with addr=10.0.0.2, port=4420 00:24:40.424 [2024-07-11 07:17:24.006091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3170 is same with the state(5) to be set 00:24:40.424 [2024-07-11 07:17:24.006110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3170 (9): Bad file descriptor 00:24:40.424 [2024-07-11 07:17:24.006139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.424 [2024-07-11 07:17:24.006151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.424 [2024-07-11 07:17:24.006161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.424 [2024-07-11 07:17:24.006179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.424 [2024-07-11 07:17:24.006190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.319 [2024-07-11 07:17:26.006234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
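The retries above land two seconds apart (07:17:20, 07:17:22, 07:17:24, 07:17:26), which is the bdev_nvme reconnect cadence this host/timeout test exercises. For orientation, a controller with that behaviour is attached roughly as sketched below; the flag names follow current scripts/rpc.py bdev_nvme_attach_controller help rather than this log, and the socket path, delay, and loss-timeout values are illustrative placeholders:

  # illustrative sketch, not copied from this run
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8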
00:24:42.319 [2024-07-11 07:17:26.006287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.319 [2024-07-11 07:17:26.006304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.319 [2024-07-11 07:17:26.006326] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:42.319 [2024-07-11 07:17:26.006344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.260 00:24:43.260 Latency(us) 00:24:43.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.260 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:43.260 NVMe0n1 : 8.08 2901.25 11.33 15.83 0.00 43815.78 2055.45 7015926.69 00:24:43.260 =================================================================================================================== 00:24:43.260 Total : 2901.25 11.33 15.83 0.00 43815.78 2055.45 7015926.69 00:24:43.260 0 00:24:43.260 07:17:27 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.260 Attaching 5 probes... 00:24:43.260 1142.225749: reset bdev controller NVMe0 00:24:43.260 1142.370129: reconnect bdev controller NVMe0 00:24:43.260 3142.648875: reconnect delay bdev controller NVMe0 00:24:43.260 3142.665849: reconnect bdev controller NVMe0 00:24:43.260 5142.996570: reconnect delay bdev controller NVMe0 00:24:43.260 5143.010244: reconnect bdev controller NVMe0 00:24:43.260 7143.291671: reconnect delay bdev controller NVMe0 00:24:43.260 7143.305439: reconnect bdev controller NVMe0 00:24:43.260 07:17:27 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:43.260 07:17:27 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:43.260 07:17:27 -- host/timeout.sh@136 -- # kill 89453 00:24:43.260 07:17:27 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.260 07:17:27 -- host/timeout.sh@139 -- # killprocess 89425 00:24:43.260 07:17:27 -- common/autotest_common.sh@926 -- # '[' -z 89425 ']' 00:24:43.260 07:17:27 -- common/autotest_common.sh@930 -- # kill -0 89425 00:24:43.260 07:17:27 -- common/autotest_common.sh@931 -- # uname 00:24:43.260 07:17:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:43.260 07:17:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89425 00:24:43.260 killing process with pid 89425 00:24:43.260 Received shutdown signal, test time was about 8.143781 seconds 00:24:43.260 00:24:43.260 Latency(us) 00:24:43.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.260 =================================================================================================================== 00:24:43.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.260 07:17:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:43.260 07:17:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:43.260 07:17:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89425' 00:24:43.260 07:17:27 -- common/autotest_common.sh@945 -- # kill 89425 00:24:43.260 07:17:27 -- common/autotest_common.sh@950 -- # wait 89425 00:24:43.521 07:17:27 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.788 07:17:27 -- host/timeout.sh@143 -- # trap - SIGINT 
SIGTERM EXIT 00:24:43.788 07:17:27 -- host/timeout.sh@145 -- # nvmftestfini 00:24:43.788 07:17:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:43.788 07:17:27 -- nvmf/common.sh@116 -- # sync 00:24:43.788 07:17:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:43.788 07:17:27 -- nvmf/common.sh@119 -- # set +e 00:24:43.788 07:17:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:43.788 07:17:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:43.788 rmmod nvme_tcp 00:24:43.788 rmmod nvme_fabrics 00:24:43.788 rmmod nvme_keyring 00:24:43.788 07:17:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:43.788 07:17:27 -- nvmf/common.sh@123 -- # set -e 00:24:43.788 07:17:27 -- nvmf/common.sh@124 -- # return 0 00:24:43.788 07:17:27 -- nvmf/common.sh@477 -- # '[' -n 88852 ']' 00:24:43.788 07:17:27 -- nvmf/common.sh@478 -- # killprocess 88852 00:24:43.788 07:17:27 -- common/autotest_common.sh@926 -- # '[' -z 88852 ']' 00:24:43.788 07:17:27 -- common/autotest_common.sh@930 -- # kill -0 88852 00:24:43.788 07:17:27 -- common/autotest_common.sh@931 -- # uname 00:24:43.788 07:17:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:43.788 07:17:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88852 00:24:43.788 killing process with pid 88852 00:24:43.788 07:17:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:43.788 07:17:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:43.788 07:17:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88852' 00:24:43.788 07:17:27 -- common/autotest_common.sh@945 -- # kill 88852 00:24:43.788 07:17:27 -- common/autotest_common.sh@950 -- # wait 88852 00:24:44.095 07:17:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:44.095 07:17:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:44.095 07:17:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:44.095 07:17:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.095 07:17:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:44.095 07:17:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.095 07:17:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.095 07:17:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.095 07:17:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:44.095 00:24:44.095 real 0m46.469s 00:24:44.095 user 2m15.456s 00:24:44.095 sys 0m5.441s 00:24:44.095 07:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.095 07:17:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.095 ************************************ 00:24:44.095 END TEST nvmf_timeout 00:24:44.095 ************************************ 00:24:44.095 07:17:28 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:24:44.095 07:17:28 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:44.095 07:17:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:44.095 07:17:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.376 07:17:28 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:44.376 00:24:44.376 real 18m12.367s 00:24:44.376 user 58m18.462s 00:24:44.376 sys 3m46.568s 00:24:44.376 07:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.376 07:17:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.376 ************************************ 00:24:44.376 END TEST nvmf_tcp 00:24:44.376 ************************************ 00:24:44.376 07:17:28 -- spdk/autotest.sh@296 -- # [[ 0 
-eq 0 ]] 00:24:44.376 07:17:28 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:44.376 07:17:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:44.376 07:17:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:44.376 07:17:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.376 ************************************ 00:24:44.376 START TEST spdkcli_nvmf_tcp 00:24:44.376 ************************************ 00:24:44.376 07:17:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:44.376 * Looking for test storage... 00:24:44.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:44.376 07:17:28 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:44.376 07:17:28 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:44.376 07:17:28 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:44.376 07:17:28 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.376 07:17:28 -- nvmf/common.sh@7 -- # uname -s 00:24:44.376 07:17:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.376 07:17:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.376 07:17:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.376 07:17:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.376 07:17:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.376 07:17:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.376 07:17:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.376 07:17:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.376 07:17:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.376 07:17:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.376 07:17:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:24:44.376 07:17:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:24:44.376 07:17:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.376 07:17:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.376 07:17:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.376 07:17:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.376 07:17:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.376 07:17:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.376 07:17:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.376 07:17:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.376 07:17:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.376 07:17:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.376 07:17:28 -- paths/export.sh@5 -- # export PATH 00:24:44.377 07:17:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.377 07:17:28 -- nvmf/common.sh@46 -- # : 0 00:24:44.377 07:17:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:44.377 07:17:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:44.377 07:17:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:44.377 07:17:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.377 07:17:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.377 07:17:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:44.377 07:17:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:44.377 07:17:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:44.377 07:17:28 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:44.377 07:17:28 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:44.377 07:17:28 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:44.377 07:17:28 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:44.377 07:17:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:44.377 07:17:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.377 07:17:28 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:44.377 07:17:28 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=89723 00:24:44.377 07:17:28 -- spdkcli/common.sh@34 -- # waitforlisten 89723 00:24:44.377 07:17:28 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:44.377 07:17:28 -- common/autotest_common.sh@819 -- # '[' -z 89723 ']' 00:24:44.377 07:17:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.377 07:17:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:44.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.377 07:17:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.377 07:17:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:44.377 07:17:28 -- common/autotest_common.sh@10 -- # set +x 00:24:44.377 [2024-07-11 07:17:28.374205] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
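The spdkcli run that follows builds the whole target configuration through the configshell tree (the long quoted command list below: malloc bdevs, a TCP transport, subsystems, namespaces, hosts and listeners, then the matching delete commands). The same objects can be created directly over JSON-RPC; a rough equivalent of the first few commands, with method and flag names taken from current scripts/rpc.py rather than from this log, would look like:

  # illustrative rpc.py equivalent of the spdkcli command list below
  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260 -f ipv4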
00:24:44.377 [2024-07-11 07:17:28.374354] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89723 ] 00:24:44.636 [2024-07-11 07:17:28.508133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:44.636 [2024-07-11 07:17:28.606642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:44.636 [2024-07-11 07:17:28.606992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.636 [2024-07-11 07:17:28.607014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.570 07:17:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:45.570 07:17:29 -- common/autotest_common.sh@852 -- # return 0 00:24:45.570 07:17:29 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:45.570 07:17:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:45.570 07:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:45.570 07:17:29 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:45.570 07:17:29 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:45.570 07:17:29 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:45.570 07:17:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:45.570 07:17:29 -- common/autotest_common.sh@10 -- # set +x 00:24:45.570 07:17:29 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:45.570 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:45.570 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:45.570 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:45.570 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:45.570 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:45.570 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:45.570 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:45.570 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:45.570 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:45.570 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:45.570 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:45.570 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:45.570 ' 00:24:45.828 [2024-07-11 07:17:29.754598] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:48.361 [2024-07-11 07:17:31.955096] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.296 [2024-07-11 07:17:33.228896] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:51.823 [2024-07-11 07:17:35.576172] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:53.727 [2024-07-11 07:17:37.607052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:55.104 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:55.104 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:55.104 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:55.104 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:55.104 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:55.104 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:55.104 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:55.104 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:55.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:55.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:55.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:55.104 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:55.104 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:55.104 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:55.104 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:55.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:55.105 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:55.363 07:17:39 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:55.363 07:17:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.363 07:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.363 07:17:39 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:55.363 07:17:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.363 07:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.363 07:17:39 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:55.363 07:17:39 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:24:55.931 07:17:39 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:55.931 07:17:39 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:55.931 07:17:39 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:55.931 07:17:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.931 07:17:39 -- common/autotest_common.sh@10 -- # set +x 00:24:55.931 07:17:39 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:55.931 07:17:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.931 07:17:39 -- 
common/autotest_common.sh@10 -- # set +x 00:24:55.931 07:17:39 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:55.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:55.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:55.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:55.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:55.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:55.931 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:55.931 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:55.931 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:55.931 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:55.931 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:55.931 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:55.931 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:55.931 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:55.931 ' 00:25:01.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:01.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:01.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:01.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:01.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:01.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:01.197 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:01.197 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:01.197 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:01.197 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:01.197 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:01.197 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:01.197 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:01.197 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:01.197 07:17:45 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:01.197 07:17:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:01.197 07:17:45 -- common/autotest_common.sh@10 -- # set +x 00:25:01.455 07:17:45 -- spdkcli/nvmf.sh@90 -- # killprocess 89723 00:25:01.455 07:17:45 -- common/autotest_common.sh@926 -- # '[' -z 89723 ']' 00:25:01.455 07:17:45 -- common/autotest_common.sh@930 -- # kill -0 89723 00:25:01.455 07:17:45 -- common/autotest_common.sh@931 -- # uname 00:25:01.455 07:17:45 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:01.455 07:17:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89723 00:25:01.455 killing process with pid 89723 00:25:01.455 07:17:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:01.455 07:17:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:01.455 07:17:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89723' 00:25:01.455 07:17:45 -- common/autotest_common.sh@945 -- # kill 89723 00:25:01.455 [2024-07-11 07:17:45.296439] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:01.455 07:17:45 -- common/autotest_common.sh@950 -- # wait 89723 00:25:01.714 07:17:45 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:01.714 07:17:45 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:01.714 07:17:45 -- spdkcli/common.sh@13 -- # '[' -n 89723 ']' 00:25:01.714 07:17:45 -- spdkcli/common.sh@14 -- # killprocess 89723 00:25:01.714 07:17:45 -- common/autotest_common.sh@926 -- # '[' -z 89723 ']' 00:25:01.714 07:17:45 -- common/autotest_common.sh@930 -- # kill -0 89723 00:25:01.714 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89723) - No such process 00:25:01.714 Process with pid 89723 is not found 00:25:01.714 07:17:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89723 is not found' 00:25:01.714 07:17:45 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:01.714 07:17:45 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:01.714 07:17:45 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:01.714 ************************************ 00:25:01.714 END TEST spdkcli_nvmf_tcp 00:25:01.714 ************************************ 00:25:01.714 00:25:01.714 real 0m17.320s 00:25:01.714 user 0m37.118s 00:25:01.714 sys 0m0.963s 00:25:01.714 07:17:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.714 07:17:45 -- common/autotest_common.sh@10 -- # set +x 00:25:01.714 07:17:45 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:01.714 07:17:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:01.714 07:17:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.714 07:17:45 -- common/autotest_common.sh@10 -- # set +x 00:25:01.714 ************************************ 00:25:01.714 START TEST nvmf_identify_passthru 00:25:01.714 ************************************ 00:25:01.714 07:17:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:01.714 * Looking for test storage... 
00:25:01.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:01.714 07:17:45 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:01.714 07:17:45 -- nvmf/common.sh@7 -- # uname -s 00:25:01.714 07:17:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.714 07:17:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.714 07:17:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.714 07:17:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.714 07:17:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.714 07:17:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.714 07:17:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.714 07:17:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.714 07:17:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.714 07:17:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.714 07:17:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:25:01.714 07:17:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:25:01.714 07:17:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.715 07:17:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.715 07:17:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:01.715 07:17:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:01.715 07:17:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.715 07:17:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.715 07:17:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.715 07:17:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- paths/export.sh@5 -- # export PATH 00:25:01.715 07:17:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- nvmf/common.sh@46 -- # : 0 00:25:01.715 07:17:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:01.715 07:17:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:01.715 07:17:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:01.715 07:17:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.715 07:17:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.715 07:17:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:01.715 07:17:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:01.715 07:17:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:01.715 07:17:45 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:01.715 07:17:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.715 07:17:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.715 07:17:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.715 07:17:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- paths/export.sh@5 -- # export PATH 00:25:01.715 07:17:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.715 07:17:45 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:01.715 07:17:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:01.715 07:17:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.715 07:17:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:01.715 07:17:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:01.715 07:17:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:01.715 07:17:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.715 07:17:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:01.715 07:17:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.715 07:17:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:01.715 07:17:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:01.715 07:17:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:01.715 07:17:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:01.715 07:17:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:01.715 07:17:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:01.715 07:17:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.715 07:17:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.715 07:17:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:01.715 07:17:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:01.715 07:17:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:01.715 07:17:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:01.715 07:17:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:01.715 07:17:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.715 07:17:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:01.715 07:17:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:01.715 07:17:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:01.715 07:17:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:01.715 07:17:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:01.715 07:17:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:01.715 Cannot find device "nvmf_tgt_br" 00:25:01.715 07:17:45 -- nvmf/common.sh@154 -- # true 00:25:01.715 07:17:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:01.715 Cannot find device "nvmf_tgt_br2" 00:25:01.715 07:17:45 -- nvmf/common.sh@155 -- # true 00:25:01.715 07:17:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:01.715 07:17:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:01.715 Cannot find device "nvmf_tgt_br" 00:25:01.715 07:17:45 -- nvmf/common.sh@157 -- # true 00:25:01.715 07:17:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:01.715 Cannot find device "nvmf_tgt_br2" 00:25:01.715 07:17:45 -- nvmf/common.sh@158 -- # true 00:25:01.715 07:17:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:01.974 07:17:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:01.974 07:17:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:01.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:01.974 07:17:45 -- nvmf/common.sh@161 -- # true 00:25:01.974 07:17:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:01.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:01.974 07:17:45 -- nvmf/common.sh@162 -- # true 00:25:01.974 07:17:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:01.974 07:17:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:01.974 07:17:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:01.974 07:17:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:01.974 07:17:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:01.974 07:17:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:01.974 07:17:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:01.974 07:17:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:01.974 07:17:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:01.974 07:17:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:01.974 07:17:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:01.974 07:17:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:01.974 07:17:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:01.974 07:17:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:01.974 07:17:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:01.974 07:17:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:01.974 07:17:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:01.974 07:17:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:01.974 07:17:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:01.974 07:17:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:01.974 07:17:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:01.974 07:17:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:01.974 07:17:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:01.974 07:17:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:01.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:25:01.974 00:25:01.974 --- 10.0.0.2 ping statistics --- 00:25:01.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.974 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:01.974 07:17:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:01.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:01.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:25:01.974 00:25:01.974 --- 10.0.0.3 ping statistics --- 00:25:01.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.974 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:01.974 07:17:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:01.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:01.974 00:25:01.974 --- 10.0.0.1 ping statistics --- 00:25:01.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.974 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:01.974 07:17:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.974 07:17:45 -- nvmf/common.sh@421 -- # return 0 00:25:01.974 07:17:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:01.974 07:17:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.974 07:17:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:01.974 07:17:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:01.974 07:17:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.974 07:17:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:01.974 07:17:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:01.974 07:17:46 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:01.974 07:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:01.974 07:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:01.974 07:17:46 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:01.974 07:17:46 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:01.974 07:17:46 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:01.974 07:17:46 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:01.974 07:17:46 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:01.974 07:17:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:01.974 07:17:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:01.974 07:17:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:01.974 07:17:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:01.974 07:17:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:02.233 07:17:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:02.233 07:17:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:02.233 07:17:46 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:02.233 07:17:46 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:02.233 07:17:46 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:02.233 07:17:46 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:02.233 07:17:46 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:02.233 07:17:46 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:02.233 07:17:46 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:02.233 07:17:46 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:02.233 07:17:46 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:02.233 07:17:46 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:02.492 07:17:46 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:02.492 07:17:46 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:02.492 07:17:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:02.492 07:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:02.492 07:17:46 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:02.492 07:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:02.492 07:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:02.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.492 07:17:46 -- target/identify_passthru.sh@31 -- # nvmfpid=90219 00:25:02.492 07:17:46 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:02.492 07:17:46 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.492 07:17:46 -- target/identify_passthru.sh@35 -- # waitforlisten 90219 00:25:02.492 07:17:46 -- common/autotest_common.sh@819 -- # '[' -z 90219 ']' 00:25:02.492 07:17:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.492 07:17:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.492 07:17:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.492 07:17:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.492 07:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:02.492 [2024-07-11 07:17:46.521515] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:02.492 [2024-07-11 07:17:46.521744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.750 [2024-07-11 07:17:46.660627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.750 [2024-07-11 07:17:46.741885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:02.750 [2024-07-11 07:17:46.742266] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.750 [2024-07-11 07:17:46.742429] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.750 [2024-07-11 07:17:46.742602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
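[annotation] The trace above launches the SPDK target inside the test namespace with --wait-for-rpc and then blocks in waitforlisten until the Unix-domain RPC socket comes up. A minimal standalone sketch of that pattern in plain bash follows; the 30-second timeout is an illustrative assumption, and the real helper additionally verifies that the process is still alive:

  # start the target inside the namespace used by this run (paths as seen in the trace)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!

  # poll for the RPC socket instead of sleeping a fixed time (timeout is illustrative)
  for _ in $(seq 1 30); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 1
  done
  [ -S /var/tmp/spdk.sock ] || { echo "nvmf_tgt did not come up" >&2; exit 1; }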
00:25:02.750 [2024-07-11 07:17:46.742907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.750 [2024-07-11 07:17:46.743004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.750 [2024-07-11 07:17:46.743175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.750 [2024-07-11 07:17:46.743181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.685 07:17:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:03.685 07:17:47 -- common/autotest_common.sh@852 -- # return 0 00:25:03.685 07:17:47 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:03.685 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.685 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.685 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.685 07:17:47 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:03.685 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.685 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.685 [2024-07-11 07:17:47.612937] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:03.685 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.685 07:17:47 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.685 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.685 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.685 [2024-07-11 07:17:47.627634] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.685 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.685 07:17:47 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:03.685 07:17:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.685 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.685 07:17:47 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:03.685 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.685 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.944 Nvme0n1 00:25:03.944 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.944 07:17:47 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:03.944 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.944 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.944 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.944 07:17:47 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:03.944 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.944 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.944 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.944 07:17:47 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.944 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.944 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.944 [2024-07-11 07:17:47.767241] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.944 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:03.944 07:17:47 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:03.944 07:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.944 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:03.944 [2024-07-11 07:17:47.775009] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:03.944 [ 00:25:03.944 { 00:25:03.944 "allow_any_host": true, 00:25:03.944 "hosts": [], 00:25:03.944 "listen_addresses": [], 00:25:03.944 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:03.944 "subtype": "Discovery" 00:25:03.944 }, 00:25:03.944 { 00:25:03.944 "allow_any_host": true, 00:25:03.944 "hosts": [], 00:25:03.944 "listen_addresses": [ 00:25:03.944 { 00:25:03.944 "adrfam": "IPv4", 00:25:03.944 "traddr": "10.0.0.2", 00:25:03.944 "transport": "TCP", 00:25:03.944 "trsvcid": "4420", 00:25:03.944 "trtype": "TCP" 00:25:03.944 } 00:25:03.944 ], 00:25:03.944 "max_cntlid": 65519, 00:25:03.944 "max_namespaces": 1, 00:25:03.944 "min_cntlid": 1, 00:25:03.944 "model_number": "SPDK bdev Controller", 00:25:03.944 "namespaces": [ 00:25:03.944 { 00:25:03.944 "bdev_name": "Nvme0n1", 00:25:03.944 "name": "Nvme0n1", 00:25:03.944 "nguid": "BAD2E499D07D44CD85B7C2EFD1B5E288", 00:25:03.944 "nsid": 1, 00:25:03.944 "uuid": "bad2e499-d07d-44cd-85b7-c2efd1b5e288" 00:25:03.944 } 00:25:03.944 ], 00:25:03.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.944 "serial_number": "SPDK00000000000001", 00:25:03.944 "subtype": "NVMe" 00:25:03.944 } 00:25:03.944 ] 00:25:03.944 07:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.944 07:17:47 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:03.944 07:17:47 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:03.944 07:17:47 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:03.945 07:17:47 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:03.945 07:17:47 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:03.945 07:17:47 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:03.945 07:17:47 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:04.203 07:17:48 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:04.203 07:17:48 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:04.203 07:17:48 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:04.203 07:17:48 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:04.203 07:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.203 07:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:04.203 07:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.203 07:17:48 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:04.203 07:17:48 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:04.203 07:17:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:04.203 07:17:48 -- nvmf/common.sh@116 -- # sync 00:25:04.462 07:17:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:04.462 07:17:48 -- nvmf/common.sh@119 -- # set +e 00:25:04.462 07:17:48 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:04.462 07:17:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:04.462 rmmod nvme_tcp 00:25:04.462 rmmod nvme_fabrics 00:25:04.462 rmmod nvme_keyring 00:25:04.462 07:17:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:04.462 07:17:48 -- nvmf/common.sh@123 -- # set -e 00:25:04.462 07:17:48 -- nvmf/common.sh@124 -- # return 0 00:25:04.462 07:17:48 -- nvmf/common.sh@477 -- # '[' -n 90219 ']' 00:25:04.462 07:17:48 -- nvmf/common.sh@478 -- # killprocess 90219 00:25:04.462 07:17:48 -- common/autotest_common.sh@926 -- # '[' -z 90219 ']' 00:25:04.462 07:17:48 -- common/autotest_common.sh@930 -- # kill -0 90219 00:25:04.462 07:17:48 -- common/autotest_common.sh@931 -- # uname 00:25:04.462 07:17:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:04.462 07:17:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90219 00:25:04.462 killing process with pid 90219 00:25:04.462 07:17:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:04.462 07:17:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:04.462 07:17:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90219' 00:25:04.462 07:17:48 -- common/autotest_common.sh@945 -- # kill 90219 00:25:04.462 [2024-07-11 07:17:48.368745] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:04.462 07:17:48 -- common/autotest_common.sh@950 -- # wait 90219 00:25:04.721 07:17:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:04.721 07:17:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:04.721 07:17:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:04.721 07:17:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.721 07:17:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:04.721 07:17:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.721 07:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:04.721 07:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.721 07:17:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:04.721 ************************************ 00:25:04.721 END TEST nvmf_identify_passthru 00:25:04.721 ************************************ 00:25:04.721 00:25:04.721 real 0m3.075s 00:25:04.721 user 0m7.841s 00:25:04.721 sys 0m0.799s 00:25:04.721 07:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:04.721 07:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:04.721 07:17:48 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:04.721 07:17:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:04.721 07:17:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:04.721 07:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:04.721 ************************************ 00:25:04.721 START TEST nvmf_dif 00:25:04.721 ************************************ 00:25:04.721 07:17:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:04.721 * Looking for test storage... 
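[annotation] The identify_passthru check that just finished reduces to asking spdk_nvme_identify the same questions twice, once against the local PCIe controller and once over NVMe/TCP, and failing if the serial or model numbers differ. A hedged bash sketch of that comparison, reusing the addresses from this run; treat the BDF and subsystem NQN as placeholders:

  bdf=0000:00:06.0                        # local controller picked by get_first_nvme_bdf
  subnqn=nqn.2016-06.io.spdk:cnode1       # subsystem exported over TCP
  identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

  pcie_sn=$("$identify" -r "trtype:PCIe traddr:$bdf" | awk '/Serial Number:/ {print $3}')
  tcp_sn=$("$identify" -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$subnqn" \
             | awk '/Serial Number:/ {print $3}')

  if [ "$pcie_sn" != "$tcp_sn" ]; then
      echo "passthru identify mismatch: $pcie_sn vs $tcp_sn" >&2
      exit 1
  fi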
00:25:04.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:04.721 07:17:48 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:04.721 07:17:48 -- nvmf/common.sh@7 -- # uname -s 00:25:04.980 07:17:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.980 07:17:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.980 07:17:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.980 07:17:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.980 07:17:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.980 07:17:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.980 07:17:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.980 07:17:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.980 07:17:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.980 07:17:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.980 07:17:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:25:04.980 07:17:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:25:04.980 07:17:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.980 07:17:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.980 07:17:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:04.980 07:17:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:04.980 07:17:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.980 07:17:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.980 07:17:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.980 07:17:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.980 07:17:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.980 07:17:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.980 07:17:48 -- paths/export.sh@5 -- # export PATH 00:25:04.980 07:17:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.980 07:17:48 -- nvmf/common.sh@46 -- # : 0 00:25:04.980 07:17:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:04.980 07:17:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:04.980 07:17:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:04.980 07:17:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.980 07:17:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.980 07:17:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:04.980 07:17:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:04.980 07:17:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:04.980 07:17:48 -- target/dif.sh@15 -- # NULL_META=16 00:25:04.980 07:17:48 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:04.980 07:17:48 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:04.980 07:17:48 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:04.980 07:17:48 -- target/dif.sh@135 -- # nvmftestinit 00:25:04.980 07:17:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:04.980 07:17:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.980 07:17:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:04.980 07:17:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:04.980 07:17:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:04.980 07:17:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.980 07:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:04.980 07:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.980 07:17:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:04.980 07:17:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:04.980 07:17:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:04.980 07:17:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:04.980 07:17:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:04.980 07:17:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:04.980 07:17:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.980 07:17:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.980 07:17:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:04.980 07:17:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:04.980 07:17:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:04.980 07:17:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:04.980 07:17:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:04.980 07:17:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.980 07:17:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:04.980 07:17:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:04.980 07:17:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:04.980 07:17:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:04.980 07:17:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:04.980 07:17:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:04.980 Cannot find device "nvmf_tgt_br" 
00:25:04.980 07:17:48 -- nvmf/common.sh@154 -- # true 00:25:04.980 07:17:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:04.980 Cannot find device "nvmf_tgt_br2" 00:25:04.980 07:17:48 -- nvmf/common.sh@155 -- # true 00:25:04.980 07:17:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:04.980 07:17:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:04.980 Cannot find device "nvmf_tgt_br" 00:25:04.980 07:17:48 -- nvmf/common.sh@157 -- # true 00:25:04.980 07:17:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:04.980 Cannot find device "nvmf_tgt_br2" 00:25:04.980 07:17:48 -- nvmf/common.sh@158 -- # true 00:25:04.980 07:17:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:04.980 07:17:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:04.980 07:17:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:04.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:04.980 07:17:48 -- nvmf/common.sh@161 -- # true 00:25:04.980 07:17:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:04.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:04.980 07:17:48 -- nvmf/common.sh@162 -- # true 00:25:04.980 07:17:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:04.981 07:17:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:04.981 07:17:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:04.981 07:17:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:04.981 07:17:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:04.981 07:17:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:04.981 07:17:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:04.981 07:17:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:04.981 07:17:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:04.981 07:17:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:04.981 07:17:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:05.239 07:17:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:05.239 07:17:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:05.239 07:17:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:05.239 07:17:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:05.239 07:17:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:05.239 07:17:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:05.239 07:17:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:05.239 07:17:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:05.239 07:17:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:05.239 07:17:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:05.239 07:17:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:05.239 07:17:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:05.239 07:17:49 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:05.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:25:05.239 00:25:05.239 --- 10.0.0.2 ping statistics --- 00:25:05.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.239 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:25:05.239 07:17:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:05.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:05.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:25:05.239 00:25:05.239 --- 10.0.0.3 ping statistics --- 00:25:05.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.239 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:05.239 07:17:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:05.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:05.239 00:25:05.239 --- 10.0.0.1 ping statistics --- 00:25:05.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.239 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:05.239 07:17:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.239 07:17:49 -- nvmf/common.sh@421 -- # return 0 00:25:05.239 07:17:49 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:05.239 07:17:49 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:05.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:05.498 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:05.498 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:05.498 07:17:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.498 07:17:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:05.498 07:17:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:05.498 07:17:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.498 07:17:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:05.498 07:17:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:05.498 07:17:49 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:05.498 07:17:49 -- target/dif.sh@137 -- # nvmfappstart 00:25:05.498 07:17:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:05.498 07:17:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:05.498 07:17:49 -- common/autotest_common.sh@10 -- # set +x 00:25:05.758 07:17:49 -- nvmf/common.sh@469 -- # nvmfpid=90573 00:25:05.758 07:17:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:05.758 07:17:49 -- nvmf/common.sh@470 -- # waitforlisten 90573 00:25:05.758 07:17:49 -- common/autotest_common.sh@819 -- # '[' -z 90573 ']' 00:25:05.758 07:17:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.758 07:17:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:05.758 07:17:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
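[annotation] The nvmf_veth_init sequence that runs before each target (and again above for the dif tests) builds a small veth-and-bridge topology: the initiator side keeps 10.0.0.1, the target side moves into the nvmf_tgt_ns_spdk namespace with 10.0.0.2, and both ends are enslaved to the nvmf_br bridge so they can reach each other. Condensed into a standalone sketch with the same names and addresses; the second target interface, error handling, and iptables cleanup are omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # allow NVMe/TCP traffic to the default port, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2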
00:25:05.758 07:17:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:05.758 07:17:49 -- common/autotest_common.sh@10 -- # set +x 00:25:05.758 [2024-07-11 07:17:49.615308] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:05.758 [2024-07-11 07:17:49.615393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.758 [2024-07-11 07:17:49.755675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.017 [2024-07-11 07:17:49.843865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:06.017 [2024-07-11 07:17:49.843983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.017 [2024-07-11 07:17:49.843995] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.017 [2024-07-11 07:17:49.844003] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.017 [2024-07-11 07:17:49.844031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.583 07:17:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:06.584 07:17:50 -- common/autotest_common.sh@852 -- # return 0 00:25:06.584 07:17:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:06.584 07:17:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:06.584 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 07:17:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.843 07:17:50 -- target/dif.sh@139 -- # create_transport 00:25:06.843 07:17:50 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:06.843 07:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.843 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 [2024-07-11 07:17:50.658090] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.843 07:17:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.843 07:17:50 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:06.843 07:17:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:06.843 07:17:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.843 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 ************************************ 00:25:06.843 START TEST fio_dif_1_default 00:25:06.843 ************************************ 00:25:06.843 07:17:50 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:06.843 07:17:50 -- target/dif.sh@86 -- # create_subsystems 0 00:25:06.843 07:17:50 -- target/dif.sh@28 -- # local sub 00:25:06.843 07:17:50 -- target/dif.sh@30 -- # for sub in "$@" 00:25:06.843 07:17:50 -- target/dif.sh@31 -- # create_subsystem 0 00:25:06.843 07:17:50 -- target/dif.sh@18 -- # local sub_id=0 00:25:06.843 07:17:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:06.843 07:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.843 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 bdev_null0 00:25:06.843 07:17:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.843 07:17:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:06.843 07:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.843 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 07:17:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.843 07:17:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:06.843 07:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.843 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 07:17:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.843 07:17:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.843 07:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.843 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:06.843 [2024-07-11 07:17:50.710277] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.843 07:17:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.843 07:17:50 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:06.843 07:17:50 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:06.843 07:17:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:06.843 07:17:50 -- nvmf/common.sh@520 -- # config=() 00:25:06.843 07:17:50 -- nvmf/common.sh@520 -- # local subsystem config 00:25:06.843 07:17:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:06.843 07:17:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.843 07:17:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:06.843 { 00:25:06.843 "params": { 00:25:06.843 "name": "Nvme$subsystem", 00:25:06.843 "trtype": "$TEST_TRANSPORT", 00:25:06.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.843 "adrfam": "ipv4", 00:25:06.843 "trsvcid": "$NVMF_PORT", 00:25:06.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.843 "hdgst": ${hdgst:-false}, 00:25:06.843 "ddgst": ${ddgst:-false} 00:25:06.843 }, 00:25:06.843 "method": "bdev_nvme_attach_controller" 00:25:06.843 } 00:25:06.843 EOF 00:25:06.843 )") 00:25:06.843 07:17:50 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.843 07:17:50 -- target/dif.sh@82 -- # gen_fio_conf 00:25:06.843 07:17:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:06.843 07:17:50 -- target/dif.sh@54 -- # local file 00:25:06.843 07:17:50 -- target/dif.sh@56 -- # cat 00:25:06.843 07:17:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:06.843 07:17:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:06.843 07:17:50 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:06.843 07:17:50 -- nvmf/common.sh@542 -- # cat 00:25:06.843 07:17:50 -- common/autotest_common.sh@1320 -- # shift 00:25:06.843 07:17:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:06.843 07:17:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.843 07:17:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:06.843 07:17:50 -- target/dif.sh@72 -- # (( file <= files )) 00:25:06.843 07:17:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:06.843 
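[annotation] On the target side, the dif tests are provisioned entirely through RPC: a TCP transport created with --dif-insert-or-strip, a null bdev carrying 16 bytes of metadata and a DIF type, and a subsystem that exports that bdev on 10.0.0.2:4420. Issued directly with scripts/rpc.py, the same sequence looks roughly like the following; the flags are copied from the trace above, but the exact option spellings should be checked against the SPDK tree in use:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420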
07:17:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:06.843 07:17:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:06.843 07:17:50 -- nvmf/common.sh@544 -- # jq . 00:25:06.843 07:17:50 -- nvmf/common.sh@545 -- # IFS=, 00:25:06.843 07:17:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:06.843 "params": { 00:25:06.843 "name": "Nvme0", 00:25:06.843 "trtype": "tcp", 00:25:06.843 "traddr": "10.0.0.2", 00:25:06.843 "adrfam": "ipv4", 00:25:06.843 "trsvcid": "4420", 00:25:06.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:06.843 "hdgst": false, 00:25:06.843 "ddgst": false 00:25:06.843 }, 00:25:06.843 "method": "bdev_nvme_attach_controller" 00:25:06.843 }' 00:25:06.843 07:17:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:06.843 07:17:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:06.843 07:17:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.843 07:17:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:06.843 07:17:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:06.843 07:17:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:06.843 07:17:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:06.843 07:17:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:06.843 07:17:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:06.843 07:17:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:07.102 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:07.102 fio-3.35 00:25:07.102 Starting 1 thread 00:25:07.361 [2024-07-11 07:17:51.342745] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:07.361 [2024-07-11 07:17:51.342811] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:19.594 00:25:19.594 filename0: (groupid=0, jobs=1): err= 0: pid=90659: Thu Jul 11 07:18:01 2024 00:25:19.594 read: IOPS=4543, BW=17.7MiB/s (18.6MB/s)(177MiB/10001msec) 00:25:19.594 slat (usec): min=5, max=254, avg= 7.14, stdev= 3.85 00:25:19.594 clat (usec): min=346, max=42001, avg=858.70, stdev=4264.99 00:25:19.594 lat (usec): min=351, max=42011, avg=865.84, stdev=4265.13 00:25:19.594 clat percentiles (usec): 00:25:19.594 | 1.00th=[ 375], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:25:19.594 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 404], 00:25:19.594 | 70.00th=[ 412], 80.00th=[ 420], 90.00th=[ 437], 95.00th=[ 465], 00:25:19.594 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:25:19.594 | 99.99th=[41681] 00:25:19.594 bw ( KiB/s): min= 6400, max=28000, per=100.00%, avg=18304.00, stdev=5753.14, samples=19 00:25:19.594 iops : min= 1600, max= 7000, avg=4576.00, stdev=1438.29, samples=19 00:25:19.594 lat (usec) : 500=96.51%, 750=2.30%, 1000=0.05% 00:25:19.594 lat (msec) : 2=0.02%, 4=0.01%, 50=1.11% 00:25:19.594 cpu : usr=87.80%, sys=9.87%, ctx=117, majf=0, minf=0 00:25:19.594 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:19.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.594 issued rwts: total=45436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.594 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:19.594 00:25:19.594 Run status group 0 (all jobs): 00:25:19.594 READ: bw=17.7MiB/s (18.6MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=177MiB (186MB), run=10001-10001msec 00:25:19.594 07:18:01 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:19.594 07:18:01 -- target/dif.sh@43 -- # local sub 00:25:19.594 07:18:01 -- target/dif.sh@45 -- # for sub in "$@" 00:25:19.594 07:18:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:19.594 07:18:01 -- target/dif.sh@36 -- # local sub_id=0 00:25:19.594 07:18:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.594 07:18:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 ************************************ 00:25:19.594 END TEST fio_dif_1_default 00:25:19.594 ************************************ 00:25:19.594 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.594 00:25:19.594 real 0m11.024s 00:25:19.594 user 0m9.448s 00:25:19.594 sys 0m1.256s 00:25:19.594 07:18:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 07:18:01 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:19.594 07:18:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:19.594 07:18:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 ************************************ 00:25:19.594 START 
TEST fio_dif_1_multi_subsystems 00:25:19.594 ************************************ 00:25:19.594 07:18:01 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:25:19.594 07:18:01 -- target/dif.sh@92 -- # local files=1 00:25:19.594 07:18:01 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:19.594 07:18:01 -- target/dif.sh@28 -- # local sub 00:25:19.594 07:18:01 -- target/dif.sh@30 -- # for sub in "$@" 00:25:19.594 07:18:01 -- target/dif.sh@31 -- # create_subsystem 0 00:25:19.594 07:18:01 -- target/dif.sh@18 -- # local sub_id=0 00:25:19.594 07:18:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 bdev_null0 00:25:19.594 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.594 07:18:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.594 07:18:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.594 07:18:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 [2024-07-11 07:18:01.781579] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.594 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.594 07:18:01 -- target/dif.sh@30 -- # for sub in "$@" 00:25:19.594 07:18:01 -- target/dif.sh@31 -- # create_subsystem 1 00:25:19.594 07:18:01 -- target/dif.sh@18 -- # local sub_id=1 00:25:19.594 07:18:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.594 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.594 bdev_null1 00:25:19.594 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.594 07:18:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:19.594 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.595 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.595 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.595 07:18:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:19.595 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.595 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:19.595 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.595 07:18:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.595 07:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.595 07:18:01 -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.595 07:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.595 07:18:01 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:19.595 07:18:01 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:19.595 07:18:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:19.595 07:18:01 -- nvmf/common.sh@520 -- # config=() 00:25:19.595 07:18:01 -- nvmf/common.sh@520 -- # local subsystem config 00:25:19.595 07:18:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:19.595 07:18:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:19.595 { 00:25:19.595 "params": { 00:25:19.595 "name": "Nvme$subsystem", 00:25:19.595 "trtype": "$TEST_TRANSPORT", 00:25:19.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.595 "adrfam": "ipv4", 00:25:19.595 "trsvcid": "$NVMF_PORT", 00:25:19.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.595 "hdgst": ${hdgst:-false}, 00:25:19.595 "ddgst": ${ddgst:-false} 00:25:19.595 }, 00:25:19.595 "method": "bdev_nvme_attach_controller" 00:25:19.595 } 00:25:19.595 EOF 00:25:19.595 )") 00:25:19.595 07:18:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.595 07:18:01 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.595 07:18:01 -- target/dif.sh@82 -- # gen_fio_conf 00:25:19.595 07:18:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:19.595 07:18:01 -- target/dif.sh@54 -- # local file 00:25:19.595 07:18:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:19.595 07:18:01 -- target/dif.sh@56 -- # cat 00:25:19.595 07:18:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:19.595 07:18:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:19.595 07:18:01 -- nvmf/common.sh@542 -- # cat 00:25:19.595 07:18:01 -- common/autotest_common.sh@1320 -- # shift 00:25:19.595 07:18:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:19.595 07:18:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.595 07:18:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:19.595 07:18:01 -- target/dif.sh@72 -- # (( file <= files )) 00:25:19.595 07:18:01 -- target/dif.sh@73 -- # cat 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:19.595 07:18:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:19.595 07:18:01 -- target/dif.sh@72 -- # (( file++ )) 00:25:19.595 07:18:01 -- target/dif.sh@72 -- # (( file <= files )) 00:25:19.595 07:18:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:19.595 { 00:25:19.595 "params": { 00:25:19.595 "name": "Nvme$subsystem", 00:25:19.595 "trtype": "$TEST_TRANSPORT", 00:25:19.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.595 "adrfam": "ipv4", 00:25:19.595 "trsvcid": "$NVMF_PORT", 00:25:19.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.595 "hdgst": ${hdgst:-false}, 00:25:19.595 "ddgst": ${ddgst:-false} 00:25:19.595 }, 00:25:19.595 "method": "bdev_nvme_attach_controller" 00:25:19.595 } 
00:25:19.595 EOF 00:25:19.595 )") 00:25:19.595 07:18:01 -- nvmf/common.sh@542 -- # cat 00:25:19.595 07:18:01 -- nvmf/common.sh@544 -- # jq . 00:25:19.595 07:18:01 -- nvmf/common.sh@545 -- # IFS=, 00:25:19.595 07:18:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:19.595 "params": { 00:25:19.595 "name": "Nvme0", 00:25:19.595 "trtype": "tcp", 00:25:19.595 "traddr": "10.0.0.2", 00:25:19.595 "adrfam": "ipv4", 00:25:19.595 "trsvcid": "4420", 00:25:19.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:19.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:19.595 "hdgst": false, 00:25:19.595 "ddgst": false 00:25:19.595 }, 00:25:19.595 "method": "bdev_nvme_attach_controller" 00:25:19.595 },{ 00:25:19.595 "params": { 00:25:19.595 "name": "Nvme1", 00:25:19.595 "trtype": "tcp", 00:25:19.595 "traddr": "10.0.0.2", 00:25:19.595 "adrfam": "ipv4", 00:25:19.595 "trsvcid": "4420", 00:25:19.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.595 "hdgst": false, 00:25:19.595 "ddgst": false 00:25:19.595 }, 00:25:19.595 "method": "bdev_nvme_attach_controller" 00:25:19.595 }' 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:19.595 07:18:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:19.595 07:18:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:19.595 07:18:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:19.595 07:18:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:19.595 07:18:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:19.595 07:18:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.595 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:19.595 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:19.595 fio-3.35 00:25:19.595 Starting 2 threads 00:25:19.595 [2024-07-11 07:18:02.539805] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:19.595 [2024-07-11 07:18:02.539850] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:29.566 00:25:29.566 filename0: (groupid=0, jobs=1): err= 0: pid=90819: Thu Jul 11 07:18:12 2024 00:25:29.566 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10009msec) 00:25:29.566 slat (nsec): min=5906, max=34103, avg=7551.31, stdev=3064.99 00:25:29.566 clat (usec): min=371, max=41596, avg=6708.35, stdev=14665.25 00:25:29.566 lat (usec): min=378, max=41605, avg=6715.90, stdev=14665.25 00:25:29.566 clat percentiles (usec): 00:25:29.566 | 1.00th=[ 379], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 396], 00:25:29.566 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 429], 00:25:29.566 | 70.00th=[ 441], 80.00th=[ 469], 90.00th=[40633], 95.00th=[41157], 00:25:29.566 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:29.566 | 99.99th=[41681] 00:25:29.566 bw ( KiB/s): min= 1760, max= 3648, per=54.11%, avg=2325.63, stdev=484.50, samples=19 00:25:29.566 iops : min= 440, max= 912, avg=581.37, stdev=121.13, samples=19 00:25:29.566 lat (usec) : 500=82.20%, 750=2.14%, 1000=0.07% 00:25:29.566 lat (msec) : 2=0.07%, 50=15.53% 00:25:29.566 cpu : usr=95.61%, sys=3.88%, ctx=6, majf=0, minf=0 00:25:29.566 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.566 issued rwts: total=5948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.566 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:29.566 filename1: (groupid=0, jobs=1): err= 0: pid=90820: Thu Jul 11 07:18:12 2024 00:25:29.566 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10016msec) 00:25:29.566 slat (nsec): min=5863, max=41532, avg=7658.03, stdev=3202.07 00:25:29.566 clat (usec): min=374, max=42022, avg=8301.66, stdev=16039.98 00:25:29.566 lat (usec): min=381, max=42032, avg=8309.32, stdev=16040.09 00:25:29.566 clat percentiles (usec): 00:25:29.566 | 1.00th=[ 383], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 400], 00:25:29.566 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 433], 00:25:29.566 | 70.00th=[ 449], 80.00th=[ 676], 90.00th=[41157], 95.00th=[41157], 00:25:29.566 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:25:29.566 | 99.99th=[42206] 00:25:29.566 bw ( KiB/s): min= 1056, max= 3136, per=44.75%, avg=1923.00, stdev=565.43, samples=20 00:25:29.567 iops : min= 264, max= 784, avg=480.75, stdev=141.36, samples=20 00:25:29.567 lat (usec) : 500=78.28%, 750=2.18% 00:25:29.567 lat (msec) : 2=0.08%, 50=19.45% 00:25:29.567 cpu : usr=95.78%, sys=3.74%, ctx=13, majf=0, minf=0 00:25:29.567 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.567 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.567 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:29.567 00:25:29.567 Run status group 0 (all jobs): 00:25:29.567 READ: bw=4297KiB/s (4400kB/s), 1922KiB/s-2377KiB/s (1968kB/s-2434kB/s), io=42.0MiB (44.1MB), run=10009-10016msec 00:25:29.567 07:18:12 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:29.567 07:18:12 -- target/dif.sh@43 -- # local sub 00:25:29.567 07:18:12 -- target/dif.sh@45 -- # for sub in "$@" 
00:25:29.567 07:18:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:29.567 07:18:12 -- target/dif.sh@36 -- # local sub_id=0 00:25:29.567 07:18:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:29.567 07:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 07:18:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 07:18:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:29.567 07:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 07:18:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 07:18:12 -- target/dif.sh@45 -- # for sub in "$@" 00:25:29.567 07:18:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:29.567 07:18:12 -- target/dif.sh@36 -- # local sub_id=1 00:25:29.567 07:18:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.567 07:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 07:18:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 07:18:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:29.567 07:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 ************************************ 00:25:29.567 END TEST fio_dif_1_multi_subsystems 00:25:29.567 ************************************ 00:25:29.567 07:18:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 00:25:29.567 real 0m11.180s 00:25:29.567 user 0m19.947s 00:25:29.567 sys 0m1.046s 00:25:29.567 07:18:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 07:18:12 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:29.567 07:18:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.567 07:18:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 ************************************ 00:25:29.567 START TEST fio_dif_rand_params 00:25:29.567 ************************************ 00:25:29.567 07:18:12 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:25:29.567 07:18:12 -- target/dif.sh@100 -- # local NULL_DIF 00:25:29.567 07:18:12 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:29.567 07:18:12 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:29.567 07:18:12 -- target/dif.sh@103 -- # bs=128k 00:25:29.567 07:18:12 -- target/dif.sh@103 -- # numjobs=3 00:25:29.567 07:18:12 -- target/dif.sh@103 -- # iodepth=3 00:25:29.567 07:18:12 -- target/dif.sh@103 -- # runtime=5 00:25:29.567 07:18:12 -- target/dif.sh@105 -- # create_subsystems 0 00:25:29.567 07:18:12 -- target/dif.sh@28 -- # local sub 00:25:29.567 07:18:12 -- target/dif.sh@30 -- # for sub in "$@" 00:25:29.567 07:18:12 -- target/dif.sh@31 -- # create_subsystem 0 00:25:29.567 07:18:12 -- target/dif.sh@18 -- # local sub_id=0 00:25:29.567 07:18:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:29.567 07:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 bdev_null0 00:25:29.567 07:18:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 07:18:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:29.567 07:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 07:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 07:18:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:29.567 07:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 07:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 07:18:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.567 07:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.567 07:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:29.567 [2024-07-11 07:18:13.018558] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.567 07:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.567 07:18:13 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:29.567 07:18:13 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:29.567 07:18:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:29.567 07:18:13 -- nvmf/common.sh@520 -- # config=() 00:25:29.567 07:18:13 -- nvmf/common.sh@520 -- # local subsystem config 00:25:29.567 07:18:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.567 07:18:13 -- target/dif.sh@82 -- # gen_fio_conf 00:25:29.567 07:18:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.567 07:18:13 -- target/dif.sh@54 -- # local file 00:25:29.567 07:18:13 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.567 07:18:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.567 { 00:25:29.567 "params": { 00:25:29.567 "name": "Nvme$subsystem", 00:25:29.567 "trtype": "$TEST_TRANSPORT", 00:25:29.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.567 "adrfam": "ipv4", 00:25:29.567 "trsvcid": "$NVMF_PORT", 00:25:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.567 "hdgst": ${hdgst:-false}, 00:25:29.567 "ddgst": ${ddgst:-false} 00:25:29.567 }, 00:25:29.567 "method": "bdev_nvme_attach_controller" 00:25:29.567 } 00:25:29.567 EOF 00:25:29.567 )") 00:25:29.567 07:18:13 -- target/dif.sh@56 -- # cat 00:25:29.567 07:18:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:29.567 07:18:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:29.567 07:18:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:29.567 07:18:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:29.567 07:18:13 -- common/autotest_common.sh@1320 -- # shift 00:25:29.567 07:18:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:29.567 07:18:13 -- nvmf/common.sh@542 -- # cat 00:25:29.567 07:18:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.567 07:18:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:29.567 07:18:13 -- target/dif.sh@72 -- # (( file <= files )) 
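The heredoc traced just above is expanded once per subsystem into a bdev_nvme_attach_controller entry and handed to the fio bdev plugin as JSON on /dev/fd/62; the expanded fragment is printed a few lines further down. As a hedged illustration of what the plugin most likely ends up parsing for this single-subsystem run, that fragment would sit inside the usual SPDK "subsystems"/"config" wrapper. The outer framing and the /tmp path are assumptions here, since only the inner object is echoed in the trace; the params mirror the printf output.

  # Hedged sketch of the assembled bdev config; the outer wrapper is assumed,
  # the params match the printf output in the trace.
  cat > /tmp/bdev_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF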
00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:29.567 07:18:13 -- nvmf/common.sh@544 -- # jq . 00:25:29.567 07:18:13 -- nvmf/common.sh@545 -- # IFS=, 00:25:29.567 07:18:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:29.567 "params": { 00:25:29.567 "name": "Nvme0", 00:25:29.567 "trtype": "tcp", 00:25:29.567 "traddr": "10.0.0.2", 00:25:29.567 "adrfam": "ipv4", 00:25:29.567 "trsvcid": "4420", 00:25:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:29.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:29.567 "hdgst": false, 00:25:29.567 "ddgst": false 00:25:29.567 }, 00:25:29.567 "method": "bdev_nvme_attach_controller" 00:25:29.567 }' 00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:29.567 07:18:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:29.567 07:18:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:29.567 07:18:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:29.567 07:18:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:29.567 07:18:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:29.567 07:18:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.567 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:29.567 ... 00:25:29.567 fio-3.35 00:25:29.567 Starting 3 threads 00:25:29.825 [2024-07-11 07:18:13.640079] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:29.825 [2024-07-11 07:18:13.640144] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:35.089 00:25:35.089 filename0: (groupid=0, jobs=1): err= 0: pid=90971: Thu Jul 11 07:18:18 2024 00:25:35.089 read: IOPS=266, BW=33.3MiB/s (35.0MB/s)(167MiB/5004msec) 00:25:35.089 slat (nsec): min=5972, max=57910, avg=12448.89, stdev=5616.45 00:25:35.089 clat (usec): min=4513, max=51504, avg=11225.53, stdev=10510.49 00:25:35.089 lat (usec): min=4534, max=51516, avg=11237.98, stdev=10510.46 00:25:35.089 clat percentiles (usec): 00:25:35.089 | 1.00th=[ 5276], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 6980], 00:25:35.089 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 8979], 00:25:35.089 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10159], 95.00th=[47449], 00:25:35.089 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51119], 99.95th=[51643], 00:25:35.089 | 99.99th=[51643] 00:25:35.089 bw ( KiB/s): min=24832, max=43776, per=31.13%, avg=34787.56, stdev=5800.31, samples=9 00:25:35.089 iops : min= 194, max= 342, avg=271.78, stdev=45.31, samples=9 00:25:35.089 lat (msec) : 10=88.91%, 20=3.90%, 50=5.99%, 100=1.20% 00:25:35.089 cpu : usr=93.80%, sys=4.64%, ctx=7, majf=0, minf=9 00:25:35.089 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:35.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.089 issued rwts: total=1335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:35.089 filename0: (groupid=0, jobs=1): err= 0: pid=90972: Thu Jul 11 07:18:18 2024 00:25:35.089 read: IOPS=341, BW=42.6MiB/s (44.7MB/s)(213MiB/5004msec) 00:25:35.089 slat (nsec): min=6064, max=60616, avg=10558.62, stdev=6418.63 00:25:35.089 clat (usec): min=3349, max=50159, avg=8766.95, stdev=3996.34 00:25:35.089 lat (usec): min=3357, max=50165, avg=8777.51, stdev=3997.35 00:25:35.089 clat percentiles (usec): 00:25:35.089 | 1.00th=[ 3392], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3589], 00:25:35.089 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[10945], 00:25:35.089 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13173], 95.00th=[13304], 00:25:35.089 | 99.00th=[13698], 99.50th=[13829], 99.90th=[47973], 99.95th=[50070], 00:25:35.089 | 99.99th=[50070] 00:25:35.089 bw ( KiB/s): min=29242, max=67584, per=39.04%, avg=43628.20, stdev=11699.97, samples=10 00:25:35.089 iops : min= 228, max= 528, avg=340.80, stdev=91.47, samples=10 00:25:35.089 lat (msec) : 4=21.27%, 10=37.08%, 20=41.48%, 50=0.12%, 100=0.06% 00:25:35.089 cpu : usr=94.02%, sys=4.52%, ctx=4, majf=0, minf=9 00:25:35.089 IO depths : 1=32.3%, 2=67.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:35.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.089 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:35.089 filename0: (groupid=0, jobs=1): err= 0: pid=90973: Thu Jul 11 07:18:18 2024 00:25:35.089 read: IOPS=265, BW=33.1MiB/s (34.8MB/s)(166MiB/5004msec) 00:25:35.089 slat (usec): min=5, max=339, avg=12.35, stdev=11.28 00:25:35.089 clat (usec): min=3606, max=51935, avg=11292.79, stdev=9945.79 00:25:35.089 lat (usec): min=3627, max=51944, avg=11305.14, stdev=9945.77 00:25:35.089 clat percentiles (usec): 
00:25:35.089 | 1.00th=[ 3687], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6652], 00:25:35.089 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:25:35.089 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11076], 95.00th=[46924], 00:25:35.089 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:25:35.089 | 99.99th=[52167] 00:25:35.089 bw ( KiB/s): min=18944, max=41984, per=30.36%, avg=33927.80, stdev=7053.86, samples=10 00:25:35.089 iops : min= 148, max= 328, avg=265.00, stdev=55.06, samples=10 00:25:35.089 lat (msec) : 4=1.88%, 10=63.90%, 20=27.88%, 50=3.99%, 100=2.34% 00:25:35.089 cpu : usr=93.50%, sys=4.62%, ctx=111, majf=0, minf=9 00:25:35.089 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:35.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.089 issued rwts: total=1327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:35.089 00:25:35.089 Run status group 0 (all jobs): 00:25:35.089 READ: bw=109MiB/s (114MB/s), 33.1MiB/s-42.6MiB/s (34.8MB/s-44.7MB/s), io=546MiB (573MB), run=5004-5004msec 00:25:35.089 07:18:18 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:35.089 07:18:18 -- target/dif.sh@43 -- # local sub 00:25:35.089 07:18:18 -- target/dif.sh@45 -- # for sub in "$@" 00:25:35.089 07:18:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:35.089 07:18:18 -- target/dif.sh@36 -- # local sub_id=0 00:25:35.089 07:18:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:35.089 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.089 07:18:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:35.089 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 07:18:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.089 07:18:18 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:35.089 07:18:18 -- target/dif.sh@109 -- # bs=4k 00:25:35.089 07:18:18 -- target/dif.sh@109 -- # numjobs=8 00:25:35.089 07:18:18 -- target/dif.sh@109 -- # iodepth=16 00:25:35.089 07:18:18 -- target/dif.sh@109 -- # runtime= 00:25:35.089 07:18:18 -- target/dif.sh@109 -- # files=2 00:25:35.089 07:18:18 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:35.089 07:18:18 -- target/dif.sh@28 -- # local sub 00:25:35.089 07:18:18 -- target/dif.sh@30 -- # for sub in "$@" 00:25:35.089 07:18:18 -- target/dif.sh@31 -- # create_subsystem 0 00:25:35.089 07:18:18 -- target/dif.sh@18 -- # local sub_id=0 00:25:35.089 07:18:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:35.089 07:18:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 bdev_null0 00:25:35.089 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.089 07:18:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:35.089 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 07:18:19 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:25:35.089 07:18:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:35.089 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.089 07:18:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.089 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 [2024-07-11 07:18:19.028103] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.089 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.089 07:18:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:35.089 07:18:19 -- target/dif.sh@31 -- # create_subsystem 1 00:25:35.089 07:18:19 -- target/dif.sh@18 -- # local sub_id=1 00:25:35.089 07:18:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:35.089 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 bdev_null1 00:25:35.089 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.089 07:18:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:35.089 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.089 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.089 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.089 07:18:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:35.089 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.090 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.090 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.090 07:18:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.090 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.090 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.090 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.090 07:18:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:35.090 07:18:19 -- target/dif.sh@31 -- # create_subsystem 2 00:25:35.090 07:18:19 -- target/dif.sh@18 -- # local sub_id=2 00:25:35.090 07:18:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:35.090 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.090 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.090 bdev_null2 00:25:35.090 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.090 07:18:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:35.090 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.090 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.090 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.090 07:18:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:35.090 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.090 07:18:19 -- 
common/autotest_common.sh@10 -- # set +x 00:25:35.090 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.090 07:18:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:35.090 07:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.090 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:35.090 07:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.090 07:18:19 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:35.090 07:18:19 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:35.090 07:18:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:35.090 07:18:19 -- nvmf/common.sh@520 -- # config=() 00:25:35.090 07:18:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.090 07:18:19 -- nvmf/common.sh@520 -- # local subsystem config 00:25:35.090 07:18:19 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.090 07:18:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.090 07:18:19 -- target/dif.sh@82 -- # gen_fio_conf 00:25:35.090 07:18:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:35.090 07:18:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.090 { 00:25:35.090 "params": { 00:25:35.090 "name": "Nvme$subsystem", 00:25:35.090 "trtype": "$TEST_TRANSPORT", 00:25:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.090 "adrfam": "ipv4", 00:25:35.090 "trsvcid": "$NVMF_PORT", 00:25:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.090 "hdgst": ${hdgst:-false}, 00:25:35.090 "ddgst": ${ddgst:-false} 00:25:35.090 }, 00:25:35.090 "method": "bdev_nvme_attach_controller" 00:25:35.090 } 00:25:35.090 EOF 00:25:35.090 )") 00:25:35.090 07:18:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:35.090 07:18:19 -- target/dif.sh@54 -- # local file 00:25:35.090 07:18:19 -- target/dif.sh@56 -- # cat 00:25:35.090 07:18:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:35.090 07:18:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:35.090 07:18:19 -- common/autotest_common.sh@1320 -- # shift 00:25:35.090 07:18:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:35.090 07:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.090 07:18:19 -- nvmf/common.sh@542 -- # cat 00:25:35.090 07:18:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:35.090 07:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:35.090 07:18:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:35.090 07:18:19 -- target/dif.sh@73 -- # cat 00:25:35.090 07:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:35.090 07:18:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:35.090 07:18:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.090 07:18:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.090 { 00:25:35.090 "params": { 00:25:35.090 "name": "Nvme$subsystem", 00:25:35.090 "trtype": "$TEST_TRANSPORT", 00:25:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.090 "adrfam": "ipv4", 00:25:35.090 "trsvcid": "$NVMF_PORT", 00:25:35.090 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.090 "hdgst": ${hdgst:-false}, 00:25:35.090 "ddgst": ${ddgst:-false} 00:25:35.090 }, 00:25:35.090 "method": "bdev_nvme_attach_controller" 00:25:35.090 } 00:25:35.090 EOF 00:25:35.090 )") 00:25:35.090 07:18:19 -- target/dif.sh@72 -- # (( file++ )) 00:25:35.090 07:18:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:35.090 07:18:19 -- nvmf/common.sh@542 -- # cat 00:25:35.090 07:18:19 -- target/dif.sh@73 -- # cat 00:25:35.090 07:18:19 -- target/dif.sh@72 -- # (( file++ )) 00:25:35.090 07:18:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:35.090 07:18:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:35.090 07:18:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:35.090 { 00:25:35.090 "params": { 00:25:35.090 "name": "Nvme$subsystem", 00:25:35.090 "trtype": "$TEST_TRANSPORT", 00:25:35.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:35.090 "adrfam": "ipv4", 00:25:35.090 "trsvcid": "$NVMF_PORT", 00:25:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:35.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:35.090 "hdgst": ${hdgst:-false}, 00:25:35.090 "ddgst": ${ddgst:-false} 00:25:35.090 }, 00:25:35.090 "method": "bdev_nvme_attach_controller" 00:25:35.090 } 00:25:35.090 EOF 00:25:35.090 )") 00:25:35.090 07:18:19 -- nvmf/common.sh@542 -- # cat 00:25:35.090 07:18:19 -- nvmf/common.sh@544 -- # jq . 00:25:35.090 07:18:19 -- nvmf/common.sh@545 -- # IFS=, 00:25:35.090 07:18:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:35.090 "params": { 00:25:35.090 "name": "Nvme0", 00:25:35.090 "trtype": "tcp", 00:25:35.090 "traddr": "10.0.0.2", 00:25:35.090 "adrfam": "ipv4", 00:25:35.090 "trsvcid": "4420", 00:25:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:35.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:35.090 "hdgst": false, 00:25:35.090 "ddgst": false 00:25:35.090 }, 00:25:35.090 "method": "bdev_nvme_attach_controller" 00:25:35.090 },{ 00:25:35.090 "params": { 00:25:35.090 "name": "Nvme1", 00:25:35.090 "trtype": "tcp", 00:25:35.090 "traddr": "10.0.0.2", 00:25:35.090 "adrfam": "ipv4", 00:25:35.090 "trsvcid": "4420", 00:25:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:35.090 "hdgst": false, 00:25:35.090 "ddgst": false 00:25:35.090 }, 00:25:35.090 "method": "bdev_nvme_attach_controller" 00:25:35.090 },{ 00:25:35.090 "params": { 00:25:35.090 "name": "Nvme2", 00:25:35.090 "trtype": "tcp", 00:25:35.090 "traddr": "10.0.0.2", 00:25:35.090 "adrfam": "ipv4", 00:25:35.090 "trsvcid": "4420", 00:25:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:35.090 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:35.090 "hdgst": false, 00:25:35.090 "ddgst": false 00:25:35.090 }, 00:25:35.090 "method": "bdev_nvme_attach_controller" 00:25:35.090 }' 00:25:35.090 07:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:35.090 07:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:35.090 07:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.090 07:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:35.090 07:18:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:35.090 07:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:35.350 07:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:35.350 07:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:35.350 
07:18:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:35.350 07:18:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:35.350 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:35.350 ... 00:25:35.350 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:35.350 ... 00:25:35.350 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:35.350 ... 00:25:35.350 fio-3.35 00:25:35.350 Starting 24 threads 00:25:35.917 [2024-07-11 07:18:19.911768] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:35.917 [2024-07-11 07:18:19.911820] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:48.133 00:25:48.133 filename0: (groupid=0, jobs=1): err= 0: pid=91074: Thu Jul 11 07:18:30 2024 00:25:48.133 read: IOPS=282, BW=1132KiB/s (1159kB/s)(11.1MiB/10035msec) 00:25:48.133 slat (usec): min=3, max=12011, avg=27.79, stdev=359.42 00:25:48.133 clat (msec): min=9, max=120, avg=56.30, stdev=18.29 00:25:48.133 lat (msec): min=9, max=120, avg=56.33, stdev=18.30 00:25:48.133 clat percentiles (msec): 00:25:48.133 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:25:48.133 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:25:48.133 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 89], 00:25:48.133 | 99.00th=[ 102], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 121], 00:25:48.133 | 99.99th=[ 121] 00:25:48.133 bw ( KiB/s): min= 864, max= 2048, per=4.32%, avg=1131.70, stdev=243.31, samples=20 00:25:48.133 iops : min= 216, max= 512, avg=282.90, stdev=60.81, samples=20 00:25:48.133 lat (msec) : 10=0.56%, 20=1.09%, 50=34.55%, 100=62.45%, 250=1.34% 00:25:48.133 cpu : usr=39.57%, sys=0.65%, ctx=1170, majf=0, minf=9 00:25:48.133 IO depths : 1=1.1%, 2=2.4%, 4=9.9%, 8=74.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:48.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 issued rwts: total=2839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.133 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.133 filename0: (groupid=0, jobs=1): err= 0: pid=91075: Thu Jul 11 07:18:30 2024 00:25:48.133 read: IOPS=301, BW=1205KiB/s (1234kB/s)(11.8MiB/10048msec) 00:25:48.133 slat (nsec): min=4785, max=80122, avg=11227.53, stdev=6726.06 00:25:48.133 clat (msec): min=6, max=120, avg=53.02, stdev=18.27 00:25:48.133 lat (msec): min=6, max=120, avg=53.04, stdev=18.27 00:25:48.133 clat percentiles (msec): 00:25:48.133 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 39], 00:25:48.133 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 59], 00:25:48.133 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 74], 95.00th=[ 84], 00:25:48.133 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 122], 99.95th=[ 122], 00:25:48.133 | 99.99th=[ 122] 00:25:48.133 bw ( KiB/s): min= 864, max= 1968, per=4.59%, avg=1204.05, stdev=247.25, samples=20 00:25:48.133 iops : min= 216, max= 492, avg=301.00, stdev=61.79, samples=20 00:25:48.133 lat (msec) : 10=1.59%, 20=1.32%, 50=45.56%, 100=50.55%, 250=0.99% 00:25:48.133 cpu : usr=39.31%, sys=0.78%, ctx=1029, majf=0, minf=9 00:25:48.133 IO 
depths : 1=0.7%, 2=1.5%, 4=7.0%, 8=77.8%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:48.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 complete : 0=0.0%, 4=89.3%, 8=6.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 issued rwts: total=3027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.133 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.133 filename0: (groupid=0, jobs=1): err= 0: pid=91076: Thu Jul 11 07:18:30 2024 00:25:48.133 read: IOPS=277, BW=1108KiB/s (1135kB/s)(10.9MiB/10025msec) 00:25:48.133 slat (usec): min=6, max=8029, avg=21.15, stdev=240.35 00:25:48.133 clat (msec): min=20, max=154, avg=57.56, stdev=18.29 00:25:48.133 lat (msec): min=20, max=154, avg=57.58, stdev=18.29 00:25:48.133 clat percentiles (msec): 00:25:48.133 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 40], 00:25:48.133 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 61], 00:25:48.133 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 91], 00:25:48.133 | 99.00th=[ 106], 99.50th=[ 120], 99.90th=[ 155], 99.95th=[ 155], 00:25:48.133 | 99.99th=[ 155] 00:25:48.133 bw ( KiB/s): min= 728, max= 1680, per=4.22%, avg=1107.00, stdev=200.85, samples=20 00:25:48.133 iops : min= 182, max= 420, avg=276.70, stdev=50.23, samples=20 00:25:48.133 lat (msec) : 50=38.70%, 100=59.61%, 250=1.69% 00:25:48.133 cpu : usr=36.55%, sys=0.55%, ctx=996, majf=0, minf=9 00:25:48.133 IO depths : 1=1.2%, 2=2.6%, 4=9.9%, 8=74.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:48.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 issued rwts: total=2778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.133 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.133 filename0: (groupid=0, jobs=1): err= 0: pid=91077: Thu Jul 11 07:18:30 2024 00:25:48.133 read: IOPS=272, BW=1091KiB/s (1117kB/s)(10.7MiB/10046msec) 00:25:48.133 slat (usec): min=4, max=4034, avg=16.04, stdev=108.88 00:25:48.133 clat (msec): min=20, max=142, avg=58.51, stdev=19.04 00:25:48.133 lat (msec): min=20, max=142, avg=58.53, stdev=19.04 00:25:48.133 clat percentiles (msec): 00:25:48.133 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 43], 00:25:48.133 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:25:48.133 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:25:48.133 | 99.00th=[ 108], 99.50th=[ 122], 99.90th=[ 142], 99.95th=[ 142], 00:25:48.133 | 99.99th=[ 142] 00:25:48.133 bw ( KiB/s): min= 720, max= 1808, per=4.16%, avg=1091.30, stdev=216.22, samples=20 00:25:48.133 iops : min= 180, max= 452, avg=272.80, stdev=54.05, samples=20 00:25:48.133 lat (msec) : 50=38.03%, 100=60.29%, 250=1.68% 00:25:48.133 cpu : usr=35.00%, sys=0.55%, ctx=912, majf=0, minf=9 00:25:48.133 IO depths : 1=0.7%, 2=1.5%, 4=7.7%, 8=77.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:48.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 issued rwts: total=2740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.133 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.133 filename0: (groupid=0, jobs=1): err= 0: pid=91078: Thu Jul 11 07:18:30 2024 00:25:48.133 read: IOPS=282, BW=1129KiB/s (1156kB/s)(11.1MiB/10031msec) 00:25:48.133 slat (usec): min=5, max=8029, avg=19.58, stdev=226.05 00:25:48.133 clat (msec): min=21, max=142, avg=56.47, stdev=18.13 00:25:48.133 
lat (msec): min=21, max=142, avg=56.49, stdev=18.13 00:25:48.133 clat percentiles (msec): 00:25:48.133 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:25:48.133 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 61], 00:25:48.133 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 91], 00:25:48.133 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 144], 99.95th=[ 144], 00:25:48.133 | 99.99th=[ 144] 00:25:48.133 bw ( KiB/s): min= 864, max= 1848, per=4.31%, avg=1129.05, stdev=233.25, samples=20 00:25:48.133 iops : min= 216, max= 462, avg=282.20, stdev=58.26, samples=20 00:25:48.133 lat (msec) : 50=41.49%, 100=57.84%, 250=0.67% 00:25:48.133 cpu : usr=34.02%, sys=0.45%, ctx=903, majf=0, minf=9 00:25:48.133 IO depths : 1=1.1%, 2=2.3%, 4=9.5%, 8=74.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:48.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.133 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.133 filename0: (groupid=0, jobs=1): err= 0: pid=91079: Thu Jul 11 07:18:30 2024 00:25:48.133 read: IOPS=324, BW=1299KiB/s (1330kB/s)(12.8MiB/10052msec) 00:25:48.133 slat (usec): min=5, max=8017, avg=16.05, stdev=186.28 00:25:48.133 clat (msec): min=3, max=103, avg=49.05, stdev=15.61 00:25:48.133 lat (msec): min=3, max=103, avg=49.07, stdev=15.61 00:25:48.133 clat percentiles (msec): 00:25:48.133 | 1.00th=[ 14], 5.00th=[ 26], 10.00th=[ 32], 20.00th=[ 36], 00:25:48.133 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 53], 00:25:48.133 | 70.00th=[ 58], 80.00th=[ 62], 90.00th=[ 71], 95.00th=[ 77], 00:25:48.133 | 99.00th=[ 85], 99.50th=[ 94], 99.90th=[ 105], 99.95th=[ 105], 00:25:48.133 | 99.99th=[ 105] 00:25:48.133 bw ( KiB/s): min= 976, max= 1916, per=4.97%, avg=1302.65, stdev=214.99, samples=20 00:25:48.133 iops : min= 244, max= 479, avg=325.65, stdev=53.76, samples=20 00:25:48.133 lat (msec) : 4=0.49%, 20=1.47%, 50=55.42%, 100=42.34%, 250=0.28% 00:25:48.133 cpu : usr=42.84%, sys=0.63%, ctx=1120, majf=0, minf=9 00:25:48.133 IO depths : 1=0.5%, 2=1.3%, 4=7.7%, 8=77.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:48.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 issued rwts: total=3264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.133 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.133 filename0: (groupid=0, jobs=1): err= 0: pid=91080: Thu Jul 11 07:18:30 2024 00:25:48.133 read: IOPS=286, BW=1148KiB/s (1175kB/s)(11.2MiB/10022msec) 00:25:48.133 slat (usec): min=6, max=10061, avg=20.99, stdev=249.59 00:25:48.133 clat (msec): min=15, max=129, avg=55.61, stdev=16.90 00:25:48.133 lat (msec): min=15, max=129, avg=55.63, stdev=16.90 00:25:48.133 clat percentiles (msec): 00:25:48.133 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 42], 00:25:48.133 | 30.00th=[ 46], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 59], 00:25:48.133 | 70.00th=[ 62], 80.00th=[ 67], 90.00th=[ 78], 95.00th=[ 90], 00:25:48.133 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 130], 99.95th=[ 130], 00:25:48.133 | 99.99th=[ 130] 00:25:48.133 bw ( KiB/s): min= 768, max= 1776, per=4.36%, avg=1143.65, stdev=218.01, samples=20 00:25:48.133 iops : min= 192, max= 444, avg=285.90, stdev=54.51, samples=20 00:25:48.133 lat (msec) : 20=0.76%, 50=35.95%, 100=61.96%, 250=1.32% 00:25:48.133 cpu 
: usr=42.91%, sys=0.77%, ctx=1560, majf=0, minf=9 00:25:48.133 IO depths : 1=1.0%, 2=2.3%, 4=9.5%, 8=74.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:48.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.133 issued rwts: total=2876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.133 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename0: (groupid=0, jobs=1): err= 0: pid=91081: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=272, BW=1091KiB/s (1118kB/s)(10.7MiB/10046msec) 00:25:48.134 slat (usec): min=4, max=8021, avg=18.57, stdev=216.35 00:25:48.134 clat (msec): min=20, max=142, avg=58.54, stdev=17.54 00:25:48.134 lat (msec): min=20, max=142, avg=58.56, stdev=17.55 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 46], 00:25:48.134 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 61], 00:25:48.134 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:25:48.134 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 144], 99.95th=[ 144], 00:25:48.134 | 99.99th=[ 144] 00:25:48.134 bw ( KiB/s): min= 896, max= 1712, per=4.16%, avg=1090.10, stdev=198.13, samples=20 00:25:48.134 iops : min= 224, max= 428, avg=272.50, stdev=49.52, samples=20 00:25:48.134 lat (msec) : 50=36.08%, 100=62.64%, 250=1.28% 00:25:48.134 cpu : usr=32.82%, sys=0.46%, ctx=897, majf=0, minf=9 00:25:48.134 IO depths : 1=0.6%, 2=1.4%, 4=8.0%, 8=76.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=2741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91082: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=254, BW=1017KiB/s (1041kB/s)(9.95MiB/10013msec) 00:25:48.134 slat (usec): min=3, max=4016, avg=17.65, stdev=125.57 00:25:48.134 clat (msec): min=14, max=125, avg=62.80, stdev=18.52 00:25:48.134 lat (msec): min=14, max=125, avg=62.82, stdev=18.52 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:25:48.134 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65], 00:25:48.134 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 90], 95.00th=[ 99], 00:25:48.134 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 120], 99.95th=[ 120], 00:25:48.134 | 99.99th=[ 127] 00:25:48.134 bw ( KiB/s): min= 768, max= 1624, per=3.86%, avg=1012.00, stdev=188.54, samples=20 00:25:48.134 iops : min= 192, max= 406, avg=253.00, stdev=47.13, samples=20 00:25:48.134 lat (msec) : 20=0.63%, 50=22.03%, 100=74.16%, 250=3.18% 00:25:48.134 cpu : usr=43.04%, sys=0.78%, ctx=1287, majf=0, minf=9 00:25:48.134 IO depths : 1=1.5%, 2=3.2%, 4=11.1%, 8=71.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=90.4%, 8=5.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=2546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91083: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=244, BW=979KiB/s (1003kB/s)(9804KiB/10010msec) 00:25:48.134 slat (usec): min=4, max=12020, avg=18.94, stdev=255.82 
00:25:48.134 clat (msec): min=21, max=155, avg=65.21, stdev=20.17 00:25:48.134 lat (msec): min=21, max=156, avg=65.23, stdev=20.17 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:25:48.134 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 70], 00:25:48.134 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 99], 00:25:48.134 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 157], 99.95th=[ 157], 00:25:48.134 | 99.99th=[ 157] 00:25:48.134 bw ( KiB/s): min= 768, max= 1616, per=3.73%, avg=978.15, stdev=193.33, samples=20 00:25:48.134 iops : min= 192, max= 404, avg=244.50, stdev=48.31, samples=20 00:25:48.134 lat (msec) : 50=22.93%, 100=73.19%, 250=3.88% 00:25:48.134 cpu : usr=38.75%, sys=0.60%, ctx=955, majf=0, minf=9 00:25:48.134 IO depths : 1=2.7%, 2=5.8%, 4=15.3%, 8=65.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=2451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91084: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=274, BW=1097KiB/s (1124kB/s)(10.8MiB/10045msec) 00:25:48.134 slat (usec): min=6, max=8051, avg=15.92, stdev=153.34 00:25:48.134 clat (msec): min=20, max=132, avg=58.11, stdev=17.62 00:25:48.134 lat (msec): min=20, max=132, avg=58.12, stdev=17.62 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 44], 00:25:48.134 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 00:25:48.134 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 90], 00:25:48.134 | 99.00th=[ 106], 99.50th=[ 116], 99.90th=[ 133], 99.95th=[ 133], 00:25:48.134 | 99.99th=[ 133] 00:25:48.134 bw ( KiB/s): min= 864, max= 1552, per=4.19%, avg=1098.40, stdev=175.98, samples=20 00:25:48.134 iops : min= 216, max= 388, avg=274.60, stdev=44.00, samples=20 00:25:48.134 lat (msec) : 50=37.30%, 100=60.92%, 250=1.78% 00:25:48.134 cpu : usr=33.92%, sys=0.45%, ctx=907, majf=0, minf=9 00:25:48.134 IO depths : 1=0.3%, 2=1.1%, 4=7.4%, 8=77.5%, 16=13.7%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=89.3%, 8=6.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=2756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91085: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=254, BW=1016KiB/s (1040kB/s)(9.93MiB/10011msec) 00:25:48.134 slat (usec): min=4, max=8093, avg=21.05, stdev=261.15 00:25:48.134 clat (msec): min=14, max=147, avg=62.84, stdev=18.68 00:25:48.134 lat (msec): min=14, max=147, avg=62.86, stdev=18.69 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 48], 00:25:48.134 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 64], 00:25:48.134 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 96], 00:25:48.134 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 148], 99.95th=[ 148], 00:25:48.134 | 99.99th=[ 148] 00:25:48.134 bw ( KiB/s): min= 792, max= 1592, per=3.85%, avg=1010.80, stdev=171.70, samples=20 00:25:48.134 iops : min= 198, max= 398, avg=252.70, stdev=42.93, samples=20 00:25:48.134 lat (msec) 
: 20=0.79%, 50=20.80%, 100=74.87%, 250=3.54% 00:25:48.134 cpu : usr=38.77%, sys=0.47%, ctx=1064, majf=0, minf=9 00:25:48.134 IO depths : 1=1.2%, 2=3.4%, 4=12.5%, 8=70.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=91.2%, 8=4.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=2543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91086: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=271, BW=1087KiB/s (1113kB/s)(10.7MiB/10035msec) 00:25:48.134 slat (usec): min=6, max=8017, avg=18.72, stdev=187.97 00:25:48.134 clat (msec): min=21, max=131, avg=58.75, stdev=17.70 00:25:48.134 lat (msec): min=21, max=131, avg=58.77, stdev=17.71 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 44], 00:25:48.134 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:25:48.134 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:25:48.134 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:25:48.134 | 99.99th=[ 132] 00:25:48.134 bw ( KiB/s): min= 816, max= 1504, per=4.13%, avg=1083.20, stdev=177.56, samples=20 00:25:48.134 iops : min= 204, max= 376, avg=270.75, stdev=44.39, samples=20 00:25:48.134 lat (msec) : 50=35.46%, 100=62.05%, 250=2.49% 00:25:48.134 cpu : usr=40.80%, sys=0.58%, ctx=1220, majf=0, minf=9 00:25:48.134 IO depths : 1=1.2%, 2=2.9%, 4=10.7%, 8=72.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=2727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91087: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=315, BW=1260KiB/s (1291kB/s)(12.4MiB/10054msec) 00:25:48.134 slat (usec): min=5, max=8026, avg=18.07, stdev=188.41 00:25:48.134 clat (usec): min=1509, max=133864, avg=50589.92, stdev=18567.81 00:25:48.134 lat (usec): min=1516, max=133872, avg=50607.99, stdev=18571.98 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 37], 00:25:48.134 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 54], 00:25:48.134 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 84], 00:25:48.134 | 99.00th=[ 102], 99.50th=[ 120], 99.90th=[ 134], 99.95th=[ 134], 00:25:48.134 | 99.99th=[ 134] 00:25:48.134 bw ( KiB/s): min= 640, max= 2345, per=4.81%, avg=1262.05, stdev=333.86, samples=20 00:25:48.134 iops : min= 160, max= 586, avg=315.50, stdev=83.42, samples=20 00:25:48.134 lat (msec) : 2=0.51%, 4=0.51%, 10=1.52%, 20=0.95%, 50=52.08% 00:25:48.134 lat (msec) : 100=43.31%, 250=1.14% 00:25:48.134 cpu : usr=41.02%, sys=0.80%, ctx=1103, majf=0, minf=9 00:25:48.134 IO depths : 1=1.0%, 2=2.1%, 4=8.6%, 8=75.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=3168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91088: Thu Jul 11 07:18:30 2024 
00:25:48.134 read: IOPS=276, BW=1107KiB/s (1134kB/s)(10.9MiB/10035msec) 00:25:48.134 slat (usec): min=6, max=8030, avg=25.16, stdev=313.31 00:25:48.134 clat (msec): min=16, max=143, avg=57.63, stdev=19.79 00:25:48.134 lat (msec): min=16, max=144, avg=57.66, stdev=19.80 00:25:48.134 clat percentiles (msec): 00:25:48.134 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:25:48.134 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:25:48.134 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 92], 00:25:48.134 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 144], 00:25:48.134 | 99.99th=[ 144] 00:25:48.134 bw ( KiB/s): min= 560, max= 1728, per=4.21%, avg=1103.45, stdev=249.74, samples=20 00:25:48.134 iops : min= 140, max= 432, avg=275.80, stdev=62.39, samples=20 00:25:48.134 lat (msec) : 20=0.25%, 50=38.98%, 100=57.92%, 250=2.84% 00:25:48.134 cpu : usr=37.71%, sys=0.58%, ctx=1000, majf=0, minf=9 00:25:48.134 IO depths : 1=1.2%, 2=2.7%, 4=9.7%, 8=74.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:48.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.134 issued rwts: total=2778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.134 filename1: (groupid=0, jobs=1): err= 0: pid=91089: Thu Jul 11 07:18:30 2024 00:25:48.134 read: IOPS=304, BW=1219KiB/s (1248kB/s)(12.0MiB/10044msec) 00:25:48.134 slat (usec): min=5, max=9026, avg=20.00, stdev=253.85 00:25:48.134 clat (msec): min=11, max=106, avg=52.36, stdev=15.88 00:25:48.134 lat (msec): min=11, max=106, avg=52.38, stdev=15.88 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 39], 00:25:48.135 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 00:25:48.135 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 82], 00:25:48.135 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 107], 99.95th=[ 107], 00:25:48.135 | 99.99th=[ 107] 00:25:48.135 bw ( KiB/s): min= 944, max= 1539, per=4.65%, avg=1218.15, stdev=188.01, samples=20 00:25:48.135 iops : min= 236, max= 384, avg=304.50, stdev=46.94, samples=20 00:25:48.135 lat (msec) : 20=1.24%, 50=46.52%, 100=52.11%, 250=0.13% 00:25:48.135 cpu : usr=40.10%, sys=1.02%, ctx=1634, majf=0, minf=9 00:25:48.135 IO depths : 1=0.8%, 2=1.9%, 4=8.2%, 8=75.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=89.9%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=3061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 filename2: (groupid=0, jobs=1): err= 0: pid=91090: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=305, BW=1222KiB/s (1252kB/s)(12.0MiB/10028msec) 00:25:48.135 slat (usec): min=6, max=8025, avg=19.56, stdev=207.78 00:25:48.135 clat (msec): min=20, max=123, avg=52.25, stdev=16.67 00:25:48.135 lat (msec): min=20, max=123, avg=52.27, stdev=16.67 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:25:48.135 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 56], 00:25:48.135 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 73], 95.00th=[ 85], 00:25:48.135 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 124], 00:25:48.135 | 99.99th=[ 124] 00:25:48.135 bw ( KiB/s): min= 896, 
max= 1640, per=4.64%, avg=1218.00, stdev=185.20, samples=20 00:25:48.135 iops : min= 224, max= 410, avg=304.45, stdev=46.32, samples=20 00:25:48.135 lat (msec) : 50=54.67%, 100=44.09%, 250=1.24% 00:25:48.135 cpu : usr=40.45%, sys=0.91%, ctx=1497, majf=0, minf=9 00:25:48.135 IO depths : 1=0.2%, 2=0.5%, 4=6.8%, 8=78.9%, 16=13.6%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=89.1%, 8=6.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=3064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 filename2: (groupid=0, jobs=1): err= 0: pid=91091: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=248, BW=995KiB/s (1018kB/s)(9956KiB/10010msec) 00:25:48.135 slat (usec): min=4, max=8019, avg=16.21, stdev=160.69 00:25:48.135 clat (msec): min=22, max=136, avg=64.24, stdev=18.37 00:25:48.135 lat (msec): min=22, max=136, avg=64.26, stdev=18.37 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:25:48.135 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 64], 00:25:48.135 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 96], 00:25:48.135 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:25:48.135 | 99.99th=[ 136] 00:25:48.135 bw ( KiB/s): min= 768, max= 1616, per=3.77%, avg=989.25, stdev=175.92, samples=20 00:25:48.135 iops : min= 192, max= 404, avg=247.30, stdev=43.99, samples=20 00:25:48.135 lat (msec) : 50=23.46%, 100=73.08%, 250=3.46% 00:25:48.135 cpu : usr=32.68%, sys=0.45%, ctx=881, majf=0, minf=9 00:25:48.135 IO depths : 1=1.9%, 2=4.5%, 4=13.6%, 8=68.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 filename2: (groupid=0, jobs=1): err= 0: pid=91092: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=249, BW=996KiB/s (1020kB/s)(9976KiB/10015msec) 00:25:48.135 slat (usec): min=6, max=8022, avg=22.62, stdev=253.76 00:25:48.135 clat (msec): min=18, max=129, avg=64.05, stdev=18.61 00:25:48.135 lat (msec): min=18, max=129, avg=64.07, stdev=18.61 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:25:48.135 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 66], 00:25:48.135 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 89], 95.00th=[ 97], 00:25:48.135 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:25:48.135 | 99.99th=[ 130] 00:25:48.135 bw ( KiB/s): min= 656, max= 1560, per=3.80%, avg=995.20, stdev=196.86, samples=20 00:25:48.135 iops : min= 164, max= 390, avg=248.80, stdev=49.21, samples=20 00:25:48.135 lat (msec) : 20=0.28%, 50=21.61%, 100=74.66%, 250=3.45% 00:25:48.135 cpu : usr=38.87%, sys=0.60%, ctx=1137, majf=0, minf=9 00:25:48.135 IO depths : 1=2.0%, 2=4.7%, 4=15.3%, 8=66.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=91.3%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=2494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 
filename2: (groupid=0, jobs=1): err= 0: pid=91093: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=265, BW=1064KiB/s (1089kB/s)(10.4MiB/10005msec) 00:25:48.135 slat (usec): min=6, max=8032, avg=21.59, stdev=269.17 00:25:48.135 clat (msec): min=7, max=143, avg=60.01, stdev=18.01 00:25:48.135 lat (msec): min=7, max=143, avg=60.03, stdev=18.01 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 00:25:48.135 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:25:48.135 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 92], 00:25:48.135 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 144], 00:25:48.135 | 99.99th=[ 144] 00:25:48.135 bw ( KiB/s): min= 856, max= 1408, per=4.00%, avg=1048.00, stdev=145.91, samples=19 00:25:48.135 iops : min= 214, max= 352, avg=262.00, stdev=36.48, samples=19 00:25:48.135 lat (msec) : 10=0.38%, 20=0.23%, 50=29.99%, 100=67.12%, 250=2.29% 00:25:48.135 cpu : usr=37.22%, sys=0.46%, ctx=1025, majf=0, minf=9 00:25:48.135 IO depths : 1=1.4%, 2=3.2%, 4=11.5%, 8=72.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=2661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 filename2: (groupid=0, jobs=1): err= 0: pid=91094: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=267, BW=1071KiB/s (1097kB/s)(10.5MiB/10016msec) 00:25:48.135 slat (usec): min=6, max=8025, avg=18.65, stdev=189.35 00:25:48.135 clat (msec): min=12, max=167, avg=59.61, stdev=19.89 00:25:48.135 lat (msec): min=12, max=167, avg=59.63, stdev=19.88 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 43], 00:25:48.135 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 62], 00:25:48.135 | 70.00th=[ 67], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 96], 00:25:48.135 | 99.00th=[ 120], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:25:48.135 | 99.99th=[ 167] 00:25:48.135 bw ( KiB/s): min= 696, max= 1768, per=4.07%, avg=1066.40, stdev=225.50, samples=20 00:25:48.135 iops : min= 174, max= 442, avg=266.60, stdev=56.38, samples=20 00:25:48.135 lat (msec) : 20=0.82%, 50=31.28%, 100=64.80%, 250=3.09% 00:25:48.135 cpu : usr=38.82%, sys=0.66%, ctx=1128, majf=0, minf=9 00:25:48.135 IO depths : 1=1.5%, 2=3.4%, 4=10.6%, 8=72.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=2682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 filename2: (groupid=0, jobs=1): err= 0: pid=91095: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=240, BW=962KiB/s (985kB/s)(9632KiB/10017msec) 00:25:48.135 slat (usec): min=4, max=8023, avg=17.28, stdev=163.46 00:25:48.135 clat (msec): min=21, max=141, avg=66.40, stdev=18.38 00:25:48.135 lat (msec): min=21, max=141, avg=66.42, stdev=18.38 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 56], 00:25:48.135 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 71], 00:25:48.135 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 96], 00:25:48.135 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 
142], 99.95th=[ 142], 00:25:48.135 | 99.99th=[ 142] 00:25:48.135 bw ( KiB/s): min= 768, max= 1408, per=3.66%, avg=958.85, stdev=153.91, samples=20 00:25:48.135 iops : min= 192, max= 352, avg=239.70, stdev=38.48, samples=20 00:25:48.135 lat (msec) : 50=16.69%, 100=79.86%, 250=3.45% 00:25:48.135 cpu : usr=32.59%, sys=0.58%, ctx=887, majf=0, minf=9 00:25:48.135 IO depths : 1=2.4%, 2=5.1%, 4=14.3%, 8=67.4%, 16=10.8%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 filename2: (groupid=0, jobs=1): err= 0: pid=91096: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=247, BW=990KiB/s (1014kB/s)(9932KiB/10028msec) 00:25:48.135 slat (usec): min=3, max=8029, avg=28.60, stdev=331.70 00:25:48.135 clat (msec): min=17, max=126, avg=64.45, stdev=18.17 00:25:48.135 lat (msec): min=17, max=126, avg=64.48, stdev=18.17 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 52], 00:25:48.135 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 68], 00:25:48.135 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 89], 95.00th=[ 96], 00:25:48.135 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 127], 99.95th=[ 127], 00:25:48.135 | 99.99th=[ 127] 00:25:48.135 bw ( KiB/s): min= 768, max= 1536, per=3.76%, avg=986.40, stdev=159.71, samples=20 00:25:48.135 iops : min= 192, max= 384, avg=246.55, stdev=39.95, samples=20 00:25:48.135 lat (msec) : 20=0.24%, 50=19.01%, 100=77.93%, 250=2.82% 00:25:48.135 cpu : usr=38.15%, sys=0.73%, ctx=1323, majf=0, minf=9 00:25:48.135 IO depths : 1=1.5%, 2=3.2%, 4=9.6%, 8=72.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:48.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 complete : 0=0.0%, 4=90.4%, 8=6.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.135 issued rwts: total=2483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.135 filename2: (groupid=0, jobs=1): err= 0: pid=91097: Thu Jul 11 07:18:30 2024 00:25:48.135 read: IOPS=246, BW=987KiB/s (1011kB/s)(9884KiB/10011msec) 00:25:48.135 slat (usec): min=5, max=11030, avg=33.78, stdev=407.77 00:25:48.135 clat (msec): min=24, max=145, avg=64.64, stdev=19.00 00:25:48.135 lat (msec): min=24, max=145, avg=64.68, stdev=19.01 00:25:48.135 clat percentiles (msec): 00:25:48.135 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:25:48.136 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 67], 00:25:48.136 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 96], 00:25:48.136 | 99.00th=[ 120], 99.50th=[ 134], 99.90th=[ 146], 99.95th=[ 146], 00:25:48.136 | 99.99th=[ 146] 00:25:48.136 bw ( KiB/s): min= 768, max= 1408, per=3.74%, avg=982.00, stdev=156.10, samples=20 00:25:48.136 iops : min= 192, max= 352, avg=245.50, stdev=39.02, samples=20 00:25:48.136 lat (msec) : 50=22.99%, 100=73.98%, 250=3.04% 00:25:48.136 cpu : usr=39.73%, sys=0.44%, ctx=1109, majf=0, minf=9 00:25:48.136 IO depths : 1=1.9%, 2=4.5%, 4=13.8%, 8=68.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:25:48.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.136 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.136 issued rwts: total=2471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:25:48.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.136 00:25:48.136 Run status group 0 (all jobs): 00:25:48.136 READ: bw=25.6MiB/s (26.8MB/s), 962KiB/s-1299KiB/s (985kB/s-1330kB/s), io=257MiB (270MB), run=10005-10054msec 00:25:48.136 07:18:30 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:48.136 07:18:30 -- target/dif.sh@43 -- # local sub 00:25:48.136 07:18:30 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.136 07:18:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:48.136 07:18:30 -- target/dif.sh@36 -- # local sub_id=0 00:25:48.136 07:18:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.136 07:18:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:48.136 07:18:30 -- target/dif.sh@36 -- # local sub_id=1 00:25:48.136 07:18:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.136 07:18:30 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:48.136 07:18:30 -- target/dif.sh@36 -- # local sub_id=2 00:25:48.136 07:18:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:48.136 07:18:30 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:48.136 07:18:30 -- target/dif.sh@115 -- # numjobs=2 00:25:48.136 07:18:30 -- target/dif.sh@115 -- # iodepth=8 00:25:48.136 07:18:30 -- target/dif.sh@115 -- # runtime=5 00:25:48.136 07:18:30 -- target/dif.sh@115 -- # files=1 00:25:48.136 07:18:30 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:48.136 07:18:30 -- target/dif.sh@28 -- # local sub 00:25:48.136 07:18:30 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.136 07:18:30 -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.136 07:18:30 -- target/dif.sh@18 -- # local sub_id=0 00:25:48.136 07:18:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 1 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 bdev_null0 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 [2024-07-11 07:18:30.444690] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.136 07:18:30 -- target/dif.sh@31 -- # create_subsystem 1 00:25:48.136 07:18:30 -- target/dif.sh@18 -- # local sub_id=1 00:25:48.136 07:18:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 bdev_null1 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.136 07:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.136 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:48.136 07:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.136 07:18:30 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:48.136 07:18:30 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:48.136 07:18:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:48.136 07:18:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.136 07:18:30 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.136 
07:18:30 -- nvmf/common.sh@520 -- # config=() 00:25:48.136 07:18:30 -- target/dif.sh@82 -- # gen_fio_conf 00:25:48.136 07:18:30 -- nvmf/common.sh@520 -- # local subsystem config 00:25:48.136 07:18:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:48.136 07:18:30 -- target/dif.sh@54 -- # local file 00:25:48.136 07:18:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.136 07:18:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.136 07:18:30 -- target/dif.sh@56 -- # cat 00:25:48.136 07:18:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:48.136 07:18:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.136 { 00:25:48.136 "params": { 00:25:48.136 "name": "Nvme$subsystem", 00:25:48.136 "trtype": "$TEST_TRANSPORT", 00:25:48.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.136 "adrfam": "ipv4", 00:25:48.136 "trsvcid": "$NVMF_PORT", 00:25:48.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.136 "hdgst": ${hdgst:-false}, 00:25:48.136 "ddgst": ${ddgst:-false} 00:25:48.136 }, 00:25:48.136 "method": "bdev_nvme_attach_controller" 00:25:48.136 } 00:25:48.136 EOF 00:25:48.136 )") 00:25:48.136 07:18:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.136 07:18:30 -- common/autotest_common.sh@1320 -- # shift 00:25:48.136 07:18:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:48.136 07:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.136 07:18:30 -- nvmf/common.sh@542 -- # cat 00:25:48.136 07:18:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:48.136 07:18:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.136 07:18:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:48.136 07:18:30 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.136 07:18:30 -- target/dif.sh@73 -- # cat 00:25:48.136 07:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:48.136 07:18:30 -- target/dif.sh@72 -- # (( file++ )) 00:25:48.136 07:18:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.136 07:18:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.136 { 00:25:48.136 "params": { 00:25:48.136 "name": "Nvme$subsystem", 00:25:48.136 "trtype": "$TEST_TRANSPORT", 00:25:48.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.136 "adrfam": "ipv4", 00:25:48.136 "trsvcid": "$NVMF_PORT", 00:25:48.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.136 "hdgst": ${hdgst:-false}, 00:25:48.136 "ddgst": ${ddgst:-false} 00:25:48.136 }, 00:25:48.136 "method": "bdev_nvme_attach_controller" 00:25:48.136 } 00:25:48.136 EOF 00:25:48.136 )") 00:25:48.136 07:18:30 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.136 07:18:30 -- nvmf/common.sh@542 -- # cat 00:25:48.136 07:18:30 -- nvmf/common.sh@544 -- # jq . 
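For reference, the create_subsystems trace above reduces to the same short RPC sequence per subsystem; run by hand it would look roughly like this (a sketch using scripts/rpc.py against the default /var/tmp/spdk.sock socket, which is what the test's rpc_cmd helper wraps):

    # 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 1, exported over NVMe/TCP
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same four calls are repeated for bdev_null1/cnode1, after which the fio client below attaches to both subsystems over TCP.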
00:25:48.136 07:18:30 -- nvmf/common.sh@545 -- # IFS=, 00:25:48.136 07:18:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:48.136 "params": { 00:25:48.136 "name": "Nvme0", 00:25:48.136 "trtype": "tcp", 00:25:48.136 "traddr": "10.0.0.2", 00:25:48.136 "adrfam": "ipv4", 00:25:48.136 "trsvcid": "4420", 00:25:48.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:48.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:48.136 "hdgst": false, 00:25:48.136 "ddgst": false 00:25:48.136 }, 00:25:48.136 "method": "bdev_nvme_attach_controller" 00:25:48.136 },{ 00:25:48.136 "params": { 00:25:48.136 "name": "Nvme1", 00:25:48.136 "trtype": "tcp", 00:25:48.136 "traddr": "10.0.0.2", 00:25:48.136 "adrfam": "ipv4", 00:25:48.136 "trsvcid": "4420", 00:25:48.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.136 "hdgst": false, 00:25:48.136 "ddgst": false 00:25:48.137 }, 00:25:48.137 "method": "bdev_nvme_attach_controller" 00:25:48.137 }' 00:25:48.137 07:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:48.137 07:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:48.137 07:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.137 07:18:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.137 07:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:48.137 07:18:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:48.137 07:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:48.137 07:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:48.137 07:18:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:48.137 07:18:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.137 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:48.137 ... 00:25:48.137 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:48.137 ... 00:25:48.137 fio-3.35 00:25:48.137 Starting 4 threads 00:25:48.137 [2024-07-11 07:18:31.224541] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
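The LD_PRELOAD line above is the SPDK fio bdev plugin in action: fio is launched with build/fio/spdk_bdev preloaded, the plugin registers the spdk_bdev ioengine and brings up a minimal SPDK application from the JSON fed in through --spdk_json_conf (here /dev/fd/62), and the bdev_nvme_attach_controller entries in that JSON become the bdevs the job file references. Outside the harness the equivalent invocation would look roughly like this (bdev.json and dif.fio are placeholder file names, not taken from the log):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

with dif.fio carrying the parameters the harness generated on the fly for this run: rw=randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, and one filenameN section per attached Nvme bdev. The "RPC Unix domain socket ... in use" notice around this point appears to be the plugin's embedded app declining to start a second RPC server on the socket already owned by the running nvmf target, and is harmless for the test.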
00:25:48.137 [2024-07-11 07:18:31.224601] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:52.321 00:25:52.321 filename0: (groupid=0, jobs=1): err= 0: pid=91233: Thu Jul 11 07:18:36 2024 00:25:52.321 read: IOPS=2223, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5003msec) 00:25:52.321 slat (nsec): min=5877, max=83334, avg=15042.46, stdev=8748.09 00:25:52.321 clat (usec): min=1026, max=5248, avg=3529.15, stdev=222.33 00:25:52.321 lat (usec): min=1033, max=5260, avg=3544.19, stdev=222.68 00:25:52.321 clat percentiles (usec): 00:25:52.321 | 1.00th=[ 2868], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3392], 00:25:52.321 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:25:52.321 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3752], 95.00th=[ 3851], 00:25:52.321 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 5080], 99.95th=[ 5211], 00:25:52.321 | 99.99th=[ 5276] 00:25:52.321 bw ( KiB/s): min=17328, max=18853, per=25.09%, avg=17863.67, stdev=430.99, samples=9 00:25:52.321 iops : min= 2166, max= 2356, avg=2232.89, stdev=53.69, samples=9 00:25:52.321 lat (msec) : 2=0.05%, 4=97.27%, 10=2.68% 00:25:52.321 cpu : usr=94.92%, sys=3.82%, ctx=4, majf=0, minf=0 00:25:52.322 IO depths : 1=8.5%, 2=19.4%, 4=55.5%, 8=16.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:52.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 issued rwts: total=11126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:52.322 filename0: (groupid=0, jobs=1): err= 0: pid=91234: Thu Jul 11 07:18:36 2024 00:25:52.322 read: IOPS=2223, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5001msec) 00:25:52.322 slat (nsec): min=6114, max=86103, avg=16235.05, stdev=7875.05 00:25:52.322 clat (usec): min=1333, max=6188, avg=3520.13, stdev=261.15 00:25:52.322 lat (usec): min=1344, max=6194, avg=3536.36, stdev=262.22 00:25:52.322 clat percentiles (usec): 00:25:52.322 | 1.00th=[ 2704], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3392], 00:25:52.322 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:25:52.322 | 70.00th=[ 3589], 80.00th=[ 3621], 90.00th=[ 3720], 95.00th=[ 3785], 00:25:52.322 | 99.00th=[ 4228], 99.50th=[ 5080], 99.90th=[ 5866], 99.95th=[ 5932], 00:25:52.322 | 99.99th=[ 6194] 00:25:52.322 bw ( KiB/s): min=17392, max=18853, per=25.09%, avg=17865.44, stdev=428.16, samples=9 00:25:52.322 iops : min= 2174, max= 2356, avg=2233.11, stdev=53.34, samples=9 00:25:52.322 lat (msec) : 2=0.22%, 4=97.86%, 10=1.92% 00:25:52.322 cpu : usr=94.36%, sys=4.38%, ctx=4, majf=0, minf=1 00:25:52.322 IO depths : 1=8.0%, 2=25.0%, 4=50.0%, 8=17.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:52.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 issued rwts: total=11120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:52.322 filename1: (groupid=0, jobs=1): err= 0: pid=91235: Thu Jul 11 07:18:36 2024 00:25:52.322 read: IOPS=2230, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5001msec) 00:25:52.322 slat (usec): min=5, max=119, avg= 9.92, stdev= 6.61 00:25:52.322 clat (usec): min=959, max=4590, avg=3537.37, stdev=206.16 00:25:52.322 lat (usec): min=966, max=4598, avg=3547.29, stdev=206.40 00:25:52.322 clat percentiles (usec): 00:25:52.322 | 1.00th=[ 3195], 5.00th=[ 3294], 
10.00th=[ 3359], 20.00th=[ 3425], 00:25:52.322 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3589], 00:25:52.322 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3818], 00:25:52.322 | 99.00th=[ 3982], 99.50th=[ 4015], 99.90th=[ 4228], 99.95th=[ 4293], 00:25:52.322 | 99.99th=[ 4359] 00:25:52.322 bw ( KiB/s): min=17408, max=18906, per=25.16%, avg=17915.78, stdev=437.19, samples=9 00:25:52.322 iops : min= 2176, max= 2363, avg=2239.44, stdev=54.58, samples=9 00:25:52.322 lat (usec) : 1000=0.04% 00:25:52.322 lat (msec) : 2=0.30%, 4=99.09%, 10=0.58% 00:25:52.322 cpu : usr=94.68%, sys=4.14%, ctx=17, majf=0, minf=9 00:25:52.322 IO depths : 1=8.4%, 2=22.7%, 4=52.2%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:52.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 issued rwts: total=11156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:52.322 filename1: (groupid=0, jobs=1): err= 0: pid=91236: Thu Jul 11 07:18:36 2024 00:25:52.322 read: IOPS=2223, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5002msec) 00:25:52.322 slat (nsec): min=5970, max=76352, avg=16784.60, stdev=8244.24 00:25:52.322 clat (usec): min=1267, max=6061, avg=3521.35, stdev=211.13 00:25:52.322 lat (usec): min=1284, max=6082, avg=3538.14, stdev=212.13 00:25:52.322 clat percentiles (usec): 00:25:52.322 | 1.00th=[ 2835], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3392], 00:25:52.322 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:25:52.322 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3818], 00:25:52.322 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 5080], 99.95th=[ 5538], 00:25:52.322 | 99.99th=[ 5800] 00:25:52.322 bw ( KiB/s): min=17408, max=18816, per=25.09%, avg=17863.11, stdev=415.32, samples=9 00:25:52.322 iops : min= 2176, max= 2352, avg=2232.89, stdev=51.91, samples=9 00:25:52.322 lat (msec) : 2=0.06%, 4=97.92%, 10=2.01% 00:25:52.322 cpu : usr=94.96%, sys=3.74%, ctx=19, majf=0, minf=0 00:25:52.322 IO depths : 1=9.2%, 2=25.0%, 4=50.0%, 8=15.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:52.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.322 issued rwts: total=11120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:52.322 00:25:52.322 Run status group 0 (all jobs): 00:25:52.322 READ: bw=69.5MiB/s (72.9MB/s), 17.4MiB/s-17.4MiB/s (18.2MB/s-18.3MB/s), io=348MiB (365MB), run=5001-5003msec 00:25:52.579 07:18:36 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:52.579 07:18:36 -- target/dif.sh@43 -- # local sub 00:25:52.579 07:18:36 -- target/dif.sh@45 -- # for sub in "$@" 00:25:52.579 07:18:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:52.579 07:18:36 -- target/dif.sh@36 -- # local sub_id=0 00:25:52.579 07:18:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:52.579 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.579 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 07:18:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:52.838 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.838 07:18:36 -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 07:18:36 -- target/dif.sh@45 -- # for sub in "$@" 00:25:52.838 07:18:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:52.838 07:18:36 -- target/dif.sh@36 -- # local sub_id=1 00:25:52.838 07:18:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.838 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 07:18:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:52.838 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 ************************************ 00:25:52.838 END TEST fio_dif_rand_params 00:25:52.838 ************************************ 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 00:25:52.838 real 0m23.683s 00:25:52.838 user 2m7.380s 00:25:52.838 sys 0m3.838s 00:25:52.838 07:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 07:18:36 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:52.838 07:18:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:52.838 07:18:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 ************************************ 00:25:52.838 START TEST fio_dif_digest 00:25:52.838 ************************************ 00:25:52.838 07:18:36 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:25:52.838 07:18:36 -- target/dif.sh@123 -- # local NULL_DIF 00:25:52.838 07:18:36 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:52.838 07:18:36 -- target/dif.sh@125 -- # local hdgst ddgst 00:25:52.838 07:18:36 -- target/dif.sh@127 -- # NULL_DIF=3 00:25:52.838 07:18:36 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:52.838 07:18:36 -- target/dif.sh@127 -- # numjobs=3 00:25:52.838 07:18:36 -- target/dif.sh@127 -- # iodepth=3 00:25:52.838 07:18:36 -- target/dif.sh@127 -- # runtime=10 00:25:52.838 07:18:36 -- target/dif.sh@128 -- # hdgst=true 00:25:52.838 07:18:36 -- target/dif.sh@128 -- # ddgst=true 00:25:52.838 07:18:36 -- target/dif.sh@130 -- # create_subsystems 0 00:25:52.838 07:18:36 -- target/dif.sh@28 -- # local sub 00:25:52.838 07:18:36 -- target/dif.sh@30 -- # for sub in "$@" 00:25:52.838 07:18:36 -- target/dif.sh@31 -- # create_subsystem 0 00:25:52.838 07:18:36 -- target/dif.sh@18 -- # local sub_id=0 00:25:52.838 07:18:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:52.838 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 bdev_null0 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 07:18:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:52.838 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 07:18:36 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:52.838 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 07:18:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:52.838 07:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.838 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:25:52.838 [2024-07-11 07:18:36.764223] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.838 07:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.838 07:18:36 -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:52.838 07:18:36 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:52.838 07:18:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:52.838 07:18:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:52.838 07:18:36 -- nvmf/common.sh@520 -- # config=() 00:25:52.838 07:18:36 -- nvmf/common.sh@520 -- # local subsystem config 00:25:52.838 07:18:36 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:52.838 07:18:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:52.838 07:18:36 -- target/dif.sh@82 -- # gen_fio_conf 00:25:52.838 07:18:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:52.838 07:18:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:52.838 { 00:25:52.838 "params": { 00:25:52.838 "name": "Nvme$subsystem", 00:25:52.838 "trtype": "$TEST_TRANSPORT", 00:25:52.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:52.838 "adrfam": "ipv4", 00:25:52.838 "trsvcid": "$NVMF_PORT", 00:25:52.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:52.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:52.838 "hdgst": ${hdgst:-false}, 00:25:52.838 "ddgst": ${ddgst:-false} 00:25:52.838 }, 00:25:52.838 "method": "bdev_nvme_attach_controller" 00:25:52.838 } 00:25:52.838 EOF 00:25:52.838 )") 00:25:52.838 07:18:36 -- target/dif.sh@54 -- # local file 00:25:52.838 07:18:36 -- target/dif.sh@56 -- # cat 00:25:52.838 07:18:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:52.838 07:18:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:52.838 07:18:36 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:52.838 07:18:36 -- common/autotest_common.sh@1320 -- # shift 00:25:52.838 07:18:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:52.838 07:18:36 -- nvmf/common.sh@542 -- # cat 00:25:52.838 07:18:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.838 07:18:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:52.838 07:18:36 -- target/dif.sh@72 -- # (( file <= files )) 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:52.838 07:18:36 -- nvmf/common.sh@544 -- # jq . 
00:25:52.838 07:18:36 -- nvmf/common.sh@545 -- # IFS=, 00:25:52.838 07:18:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:52.838 "params": { 00:25:52.838 "name": "Nvme0", 00:25:52.838 "trtype": "tcp", 00:25:52.838 "traddr": "10.0.0.2", 00:25:52.838 "adrfam": "ipv4", 00:25:52.838 "trsvcid": "4420", 00:25:52.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:52.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:52.838 "hdgst": true, 00:25:52.838 "ddgst": true 00:25:52.838 }, 00:25:52.838 "method": "bdev_nvme_attach_controller" 00:25:52.838 }' 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:52.838 07:18:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:52.838 07:18:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:52.838 07:18:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:52.838 07:18:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:52.838 07:18:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:52.838 07:18:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.097 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:53.097 ... 00:25:53.097 fio-3.35 00:25:53.097 Starting 3 threads 00:25:53.355 [2024-07-11 07:18:37.383187] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
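Reformatted for readability, the controller-attach parameters generated just above are the same as in the earlier runs except that hdgst and ddgst are now true, i.e. the NVMe/TCP header and data digests this fio_dif_digest test is exercising:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      },
      "method": "bdev_nvme_attach_controller"
    }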
00:25:53.355 [2024-07-11 07:18:37.383271] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:05.559 00:26:05.559 filename0: (groupid=0, jobs=1): err= 0: pid=91338: Thu Jul 11 07:18:47 2024 00:26:05.559 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(273MiB/10004msec) 00:26:05.559 slat (nsec): min=5690, max=61383, avg=15527.16, stdev=6338.80 00:26:05.559 clat (usec): min=4999, max=23733, avg=13716.28, stdev=1921.67 00:26:05.559 lat (usec): min=5012, max=23749, avg=13731.81, stdev=1922.98 00:26:05.559 clat percentiles (usec): 00:26:05.559 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[13173], 00:26:05.559 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:26:05.559 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:26:05.559 | 99.00th=[16909], 99.50th=[17433], 99.90th=[22152], 99.95th=[22152], 00:26:05.559 | 99.99th=[23725] 00:26:05.559 bw ( KiB/s): min=25600, max=32000, per=29.51%, avg=27917.47, stdev=1766.37, samples=19 00:26:05.559 iops : min= 200, max= 250, avg=218.11, stdev=13.80, samples=19 00:26:05.559 lat (msec) : 10=9.89%, 20=89.98%, 50=0.14% 00:26:05.559 cpu : usr=94.43%, sys=3.99%, ctx=16, majf=0, minf=9 00:26:05.559 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.559 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.559 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.559 filename0: (groupid=0, jobs=1): err= 0: pid=91339: Thu Jul 11 07:18:47 2024 00:26:05.559 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(326MiB/10049msec) 00:26:05.559 slat (nsec): min=6279, max=69427, avg=17913.64, stdev=6721.33 00:26:05.559 clat (usec): min=7825, max=52909, avg=11535.22, stdev=7185.16 00:26:05.559 lat (usec): min=7843, max=52930, avg=11553.13, stdev=7185.09 00:26:05.559 clat percentiles (usec): 00:26:05.559 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:26:05.559 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:26:05.559 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11994], 00:26:05.559 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:26:05.559 | 99.99th=[52691] 00:26:05.559 bw ( KiB/s): min=25344, max=39424, per=35.22%, avg=33318.40, stdev=4196.88, samples=20 00:26:05.559 iops : min= 198, max= 308, avg=260.30, stdev=32.79, samples=20 00:26:05.559 lat (msec) : 10=37.74%, 20=59.08%, 50=0.65%, 100=2.53% 00:26:05.559 cpu : usr=94.93%, sys=3.73%, ctx=12, majf=0, minf=0 00:26:05.559 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.559 issued rwts: total=2605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.559 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.559 filename0: (groupid=0, jobs=1): err= 0: pid=91340: Thu Jul 11 07:18:47 2024 00:26:05.559 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(330MiB/10006msec) 00:26:05.559 slat (nsec): min=6178, max=61472, avg=13755.65, stdev=6736.79 00:26:05.559 clat (usec): min=5906, max=15619, avg=11368.71, stdev=1959.71 00:26:05.559 lat (usec): min=5917, max=15629, avg=11382.46, stdev=1960.87 00:26:05.559 clat percentiles (usec): 
00:26:05.559 | 1.00th=[ 6521], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[10421], 00:26:05.559 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:26:05.559 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:26:05.559 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15008], 99.95th=[15270], 00:26:05.559 | 99.99th=[15664] 00:26:05.559 bw ( KiB/s): min=29952, max=38912, per=35.67%, avg=33738.11, stdev=2484.33, samples=19 00:26:05.559 iops : min= 234, max= 304, avg=263.58, stdev=19.41, samples=19 00:26:05.559 lat (msec) : 10=17.87%, 20=82.13% 00:26:05.559 cpu : usr=94.06%, sys=4.39%, ctx=28, majf=0, minf=9 00:26:05.559 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.559 issued rwts: total=2636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.559 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.559 00:26:05.559 Run status group 0 (all jobs): 00:26:05.559 READ: bw=92.4MiB/s (96.9MB/s), 27.3MiB/s-32.9MiB/s (28.6MB/s-34.5MB/s), io=928MiB (973MB), run=10004-10049msec 00:26:05.559 07:18:47 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:05.559 07:18:47 -- target/dif.sh@43 -- # local sub 00:26:05.559 07:18:47 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.559 07:18:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.559 07:18:47 -- target/dif.sh@36 -- # local sub_id=0 00:26:05.559 07:18:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.559 07:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.559 07:18:47 -- common/autotest_common.sh@10 -- # set +x 00:26:05.559 07:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.559 07:18:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.559 07:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.559 07:18:47 -- common/autotest_common.sh@10 -- # set +x 00:26:05.559 ************************************ 00:26:05.559 END TEST fio_dif_digest 00:26:05.559 ************************************ 00:26:05.559 07:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.559 00:26:05.559 real 0m11.073s 00:26:05.559 user 0m29.060s 00:26:05.559 sys 0m1.517s 00:26:05.559 07:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.559 07:18:47 -- common/autotest_common.sh@10 -- # set +x 00:26:05.559 07:18:47 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:05.559 07:18:47 -- target/dif.sh@147 -- # nvmftestfini 00:26:05.559 07:18:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:05.559 07:18:47 -- nvmf/common.sh@116 -- # sync 00:26:05.559 07:18:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:05.559 07:18:47 -- nvmf/common.sh@119 -- # set +e 00:26:05.559 07:18:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:05.559 07:18:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:05.559 rmmod nvme_tcp 00:26:05.559 rmmod nvme_fabrics 00:26:05.559 rmmod nvme_keyring 00:26:05.559 07:18:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:05.559 07:18:47 -- nvmf/common.sh@123 -- # set -e 00:26:05.559 07:18:47 -- nvmf/common.sh@124 -- # return 0 00:26:05.559 07:18:47 -- nvmf/common.sh@477 -- # '[' -n 90573 ']' 00:26:05.559 07:18:47 -- nvmf/common.sh@478 -- # killprocess 90573 00:26:05.559 07:18:47 -- common/autotest_common.sh@926 -- # '[' -z 90573 ']' 
00:26:05.559 07:18:47 -- common/autotest_common.sh@930 -- # kill -0 90573 00:26:05.559 07:18:47 -- common/autotest_common.sh@931 -- # uname 00:26:05.560 07:18:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:05.560 07:18:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90573 00:26:05.560 killing process with pid 90573 00:26:05.560 07:18:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:05.560 07:18:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:05.560 07:18:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90573' 00:26:05.560 07:18:47 -- common/autotest_common.sh@945 -- # kill 90573 00:26:05.560 07:18:47 -- common/autotest_common.sh@950 -- # wait 90573 00:26:05.560 07:18:48 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:05.560 07:18:48 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:05.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:05.560 Waiting for block devices as requested 00:26:05.560 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:05.560 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:05.560 07:18:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:05.560 07:18:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:05.560 07:18:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.560 07:18:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:05.560 07:18:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.560 07:18:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:05.560 00:26:05.560 real 1m0.108s 00:26:05.560 user 3m51.375s 00:26:05.560 sys 0m14.350s 00:26:05.560 07:18:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.560 ************************************ 00:26:05.560 END TEST nvmf_dif 00:26:05.560 ************************************ 00:26:05.560 07:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:05.560 07:18:48 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:05.560 07:18:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:05.560 07:18:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:05.560 07:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:05.560 ************************************ 00:26:05.560 START TEST nvmf_abort_qd_sizes 00:26:05.560 ************************************ 00:26:05.560 07:18:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:05.560 * Looking for test storage... 
00:26:05.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:05.560 07:18:48 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:05.560 07:18:48 -- nvmf/common.sh@7 -- # uname -s 00:26:05.560 07:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.560 07:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.560 07:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.560 07:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.560 07:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.560 07:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.560 07:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.560 07:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.560 07:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.560 07:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 00:26:05.560 07:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=4394e380-0dda-4e84-a19e-ee0fd4897b77 00:26:05.560 07:18:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.560 07:18:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.560 07:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:05.560 07:18:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.560 07:18:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.560 07:18:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.560 07:18:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.560 07:18:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.560 07:18:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.560 07:18:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.560 07:18:48 -- paths/export.sh@5 -- # export PATH 00:26:05.560 07:18:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.560 07:18:48 -- nvmf/common.sh@46 -- # : 0 00:26:05.560 07:18:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:05.560 07:18:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:05.560 07:18:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:05.560 07:18:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.560 07:18:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.560 07:18:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:05.560 07:18:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:05.560 07:18:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:05.560 07:18:48 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:05.560 07:18:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:05.560 07:18:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.560 07:18:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:05.560 07:18:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:05.560 07:18:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:05.560 07:18:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.560 07:18:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:05.560 07:18:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.560 07:18:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:05.560 07:18:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:05.560 07:18:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.560 07:18:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.560 07:18:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:05.560 07:18:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:05.560 07:18:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:05.560 07:18:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:05.560 07:18:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:05.560 07:18:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.560 07:18:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:05.560 07:18:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:05.560 07:18:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:05.560 07:18:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:05.560 07:18:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:05.560 07:18:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:05.560 Cannot find device "nvmf_tgt_br" 00:26:05.560 07:18:49 -- nvmf/common.sh@154 -- # true 00:26:05.560 07:18:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.560 Cannot find device "nvmf_tgt_br2" 00:26:05.560 07:18:49 -- nvmf/common.sh@155 -- # true 
00:26:05.560 07:18:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:05.560 07:18:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:05.560 Cannot find device "nvmf_tgt_br" 00:26:05.560 07:18:49 -- nvmf/common.sh@157 -- # true 00:26:05.560 07:18:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:05.560 Cannot find device "nvmf_tgt_br2" 00:26:05.560 07:18:49 -- nvmf/common.sh@158 -- # true 00:26:05.560 07:18:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:05.560 07:18:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:05.560 07:18:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.560 07:18:49 -- nvmf/common.sh@161 -- # true 00:26:05.560 07:18:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:05.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.560 07:18:49 -- nvmf/common.sh@162 -- # true 00:26:05.560 07:18:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:05.560 07:18:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:05.560 07:18:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:05.560 07:18:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:05.560 07:18:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:05.560 07:18:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:05.560 07:18:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:05.560 07:18:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:05.560 07:18:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:05.560 07:18:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:05.560 07:18:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:05.560 07:18:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:05.560 07:18:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:05.560 07:18:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:05.560 07:18:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:05.560 07:18:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:05.560 07:18:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:05.560 07:18:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:05.560 07:18:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:05.560 07:18:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:05.560 07:18:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:05.560 07:18:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:05.560 07:18:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:05.560 07:18:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:05.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:05.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:26:05.561 00:26:05.561 --- 10.0.0.2 ping statistics --- 00:26:05.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.561 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:05.561 07:18:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:05.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:05.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:26:05.561 00:26:05.561 --- 10.0.0.3 ping statistics --- 00:26:05.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.561 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:05.561 07:18:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:05.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:26:05.561 00:26:05.561 --- 10.0.0.1 ping statistics --- 00:26:05.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.561 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:26:05.561 07:18:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.561 07:18:49 -- nvmf/common.sh@421 -- # return 0 00:26:05.561 07:18:49 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:05.561 07:18:49 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:06.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:06.152 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:06.152 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:06.411 07:18:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.411 07:18:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:06.411 07:18:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:06.411 07:18:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.411 07:18:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:06.411 07:18:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:06.411 07:18:50 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:06.411 07:18:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:06.411 07:18:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:06.411 07:18:50 -- common/autotest_common.sh@10 -- # set +x 00:26:06.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.411 07:18:50 -- nvmf/common.sh@469 -- # nvmfpid=91927 00:26:06.411 07:18:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:06.411 07:18:50 -- nvmf/common.sh@470 -- # waitforlisten 91927 00:26:06.411 07:18:50 -- common/autotest_common.sh@819 -- # '[' -z 91927 ']' 00:26:06.411 07:18:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.411 07:18:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:06.411 07:18:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.411 07:18:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:06.411 07:18:50 -- common/autotest_common.sh@10 -- # set +x 00:26:06.411 [2024-07-11 07:18:50.306690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
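The nvmf_veth_init sequence above (confirmed by the three successful pings) builds the virtual topology the abort_qd_sizes test runs against: the initiator side stays in the root namespace at 10.0.0.1, while the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, all joined by the nvmf_br bridge. Condensed to its core, the setup traced above is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

plus the matching second target interface (nvmf_tgt_if2 at 10.0.0.3) and the "ip link set ... up" calls shown in the trace. This is why the target started next listens on 10.0.0.2:4420 and is reachable from the host side at 10.0.0.1.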
00:26:06.411 [2024-07-11 07:18:50.306781] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.411 [2024-07-11 07:18:50.449107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.670 [2024-07-11 07:18:50.566343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:06.670 [2024-07-11 07:18:50.566724] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.670 [2024-07-11 07:18:50.566900] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.670 [2024-07-11 07:18:50.567080] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.670 [2024-07-11 07:18:50.567258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.670 [2024-07-11 07:18:50.567418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.670 [2024-07-11 07:18:50.567580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.670 [2024-07-11 07:18:50.567570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.237 07:18:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:07.237 07:18:51 -- common/autotest_common.sh@852 -- # return 0 00:26:07.237 07:18:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:07.237 07:18:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:07.237 07:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 07:18:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:07.496 07:18:51 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:07.496 07:18:51 -- scripts/common.sh@312 -- # local nvmes 00:26:07.496 07:18:51 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:07.496 07:18:51 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:07.496 07:18:51 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:07.496 07:18:51 -- scripts/common.sh@297 -- # local bdf= 00:26:07.496 07:18:51 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:07.496 07:18:51 -- scripts/common.sh@232 -- # local class 00:26:07.496 07:18:51 -- scripts/common.sh@233 -- # local subclass 00:26:07.496 07:18:51 -- scripts/common.sh@234 -- # local progif 00:26:07.496 07:18:51 -- scripts/common.sh@235 -- # printf %02x 1 00:26:07.496 07:18:51 -- scripts/common.sh@235 -- # class=01 00:26:07.496 07:18:51 -- scripts/common.sh@236 -- # printf %02x 8 00:26:07.496 07:18:51 -- scripts/common.sh@236 -- # subclass=08 00:26:07.496 07:18:51 -- scripts/common.sh@237 -- # printf %02x 2 00:26:07.496 07:18:51 -- scripts/common.sh@237 -- # progif=02 00:26:07.496 07:18:51 -- scripts/common.sh@239 -- # hash lspci 00:26:07.496 07:18:51 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:07.496 07:18:51 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:07.496 07:18:51 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:07.496 07:18:51 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:07.496 07:18:51 -- scripts/common.sh@244 -- # tr -d '"' 00:26:07.496 07:18:51 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:07.496 07:18:51 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:07.496 07:18:51 -- scripts/common.sh@15 -- # local i 00:26:07.496 07:18:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:07.496 07:18:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:07.496 07:18:51 -- scripts/common.sh@24 -- # return 0 00:26:07.496 07:18:51 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:07.496 07:18:51 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:07.496 07:18:51 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:07.496 07:18:51 -- scripts/common.sh@15 -- # local i 00:26:07.496 07:18:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:07.496 07:18:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:07.496 07:18:51 -- scripts/common.sh@24 -- # return 0 00:26:07.496 07:18:51 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:07.496 07:18:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:07.496 07:18:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:07.496 07:18:51 -- scripts/common.sh@322 -- # uname -s 00:26:07.496 07:18:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:07.496 07:18:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:07.496 07:18:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:07.496 07:18:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:07.496 07:18:51 -- scripts/common.sh@322 -- # uname -s 00:26:07.496 07:18:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:07.496 07:18:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:07.496 07:18:51 -- scripts/common.sh@327 -- # (( 2 )) 00:26:07.496 07:18:51 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:07.496 07:18:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:07.496 07:18:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.496 07:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 ************************************ 00:26:07.496 START TEST spdk_target_abort 00:26:07.496 ************************************ 00:26:07.496 07:18:51 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:07.496 07:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.496 07:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 spdk_targetn1 00:26:07.496 07:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:07.496 07:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.496 07:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 [2024-07-11 
07:18:51.466673] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.496 07:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:07.496 07:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.496 07:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 07:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:07.496 07:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.496 07:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 07:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:07.496 07:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.496 07:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:07.496 [2024-07-11 07:18:51.498880] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.496 07:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:07.496 07:18:51 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:10.779 Initializing NVMe Controllers 00:26:10.779 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:10.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:10.779 Initialization complete. Launching workers. 00:26:10.779 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10600, failed: 0 00:26:10.779 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1193, failed to submit 9407 00:26:10.779 success 770, unsuccess 423, failed 0 00:26:10.779 07:18:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:10.779 07:18:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:14.063 Initializing NVMe Controllers 00:26:14.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:14.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:14.063 Initialization complete. Launching workers. 00:26:14.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5950, failed: 0 00:26:14.063 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1210, failed to submit 4740 00:26:14.063 success 265, unsuccess 945, failed 0 00:26:14.063 07:18:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:14.063 07:18:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:17.350 Initializing NVMe Controllers 00:26:17.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:17.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:17.350 Initialization complete. Launching workers. 
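The spdk_target half of the test traced above boils down to a short RPC sequence plus a queue-depth sweep. A condensed sketch, assuming an SPDK target process is already running and using the stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper (addresses, NQN and flags are copied verbatim from the trace):

    rpc=scripts/rpc.py
    # claim the local NVMe device and export it over NVMe/TCP
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
    # sweep the abort example over the queue depths exercised by the test
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done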
00:26:17.350 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31443, failed: 0 00:26:17.350 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2591, failed to submit 28852 00:26:17.350 success 528, unsuccess 2063, failed 0 00:26:17.350 07:19:01 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:17.350 07:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.350 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:26:17.350 07:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.350 07:19:01 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:17.350 07:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.350 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:26:17.917 07:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.917 07:19:01 -- target/abort_qd_sizes.sh@62 -- # killprocess 91927 00:26:17.917 07:19:01 -- common/autotest_common.sh@926 -- # '[' -z 91927 ']' 00:26:17.917 07:19:01 -- common/autotest_common.sh@930 -- # kill -0 91927 00:26:17.917 07:19:01 -- common/autotest_common.sh@931 -- # uname 00:26:17.918 07:19:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.918 07:19:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91927 00:26:17.918 07:19:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:17.918 killing process with pid 91927 00:26:17.918 07:19:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:17.918 07:19:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91927' 00:26:17.918 07:19:01 -- common/autotest_common.sh@945 -- # kill 91927 00:26:17.918 07:19:01 -- common/autotest_common.sh@950 -- # wait 91927 00:26:18.176 00:26:18.176 real 0m10.699s 00:26:18.176 user 0m43.545s 00:26:18.176 sys 0m1.820s 00:26:18.176 07:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:18.176 ************************************ 00:26:18.176 END TEST spdk_target_abort 00:26:18.176 ************************************ 00:26:18.176 07:19:02 -- common/autotest_common.sh@10 -- # set +x 00:26:18.176 07:19:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:18.176 07:19:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:18.176 07:19:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:18.176 07:19:02 -- common/autotest_common.sh@10 -- # set +x 00:26:18.176 ************************************ 00:26:18.176 START TEST kernel_target_abort 00:26:18.176 ************************************ 00:26:18.176 07:19:02 -- common/autotest_common.sh@1104 -- # kernel_target 00:26:18.176 07:19:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:18.176 07:19:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:18.176 07:19:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:18.176 07:19:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:18.176 07:19:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:18.176 07:19:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:18.176 07:19:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:18.176 07:19:02 -- nvmf/common.sh@627 -- # local block nvme 00:26:18.176 07:19:02 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:18.176 07:19:02 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:18.176 07:19:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:18.176 07:19:02 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:18.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:18.742 Waiting for block devices as requested 00:26:18.742 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:18.742 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:18.742 07:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:18.742 07:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:18.742 07:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:18.742 07:19:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:18.742 07:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:19.000 No valid GPT data, bailing 00:26:19.000 07:19:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:19.000 07:19:02 -- scripts/common.sh@393 -- # pt= 00:26:19.000 07:19:02 -- scripts/common.sh@394 -- # return 1 00:26:19.000 07:19:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:19.000 07:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:19.000 07:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:19.000 07:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:19.000 07:19:02 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:19.000 07:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:19.000 No valid GPT data, bailing 00:26:19.000 07:19:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:19.000 07:19:02 -- scripts/common.sh@393 -- # pt= 00:26:19.000 07:19:02 -- scripts/common.sh@394 -- # return 1 00:26:19.000 07:19:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:19.000 07:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:19.000 07:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:19.000 07:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:19.000 07:19:02 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:19.000 07:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:19.000 No valid GPT data, bailing 00:26:19.000 07:19:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:19.000 07:19:02 -- scripts/common.sh@393 -- # pt= 00:26:19.000 07:19:02 -- scripts/common.sh@394 -- # return 1 00:26:19.000 07:19:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:19.000 07:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:19.000 07:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:19.000 07:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:19.000 07:19:02 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:19.000 07:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:19.000 No valid GPT data, bailing 00:26:19.000 07:19:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:19.000 07:19:03 -- scripts/common.sh@393 -- # pt= 00:26:19.001 07:19:03 -- scripts/common.sh@394 -- # return 1 00:26:19.001 07:19:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:19.001 07:19:03 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:19.001 07:19:03 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:19.001 07:19:03 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:19.001 07:19:03 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:19.001 07:19:03 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:19.001 07:19:03 -- nvmf/common.sh@654 -- # echo 1 00:26:19.001 07:19:03 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:19.001 07:19:03 -- nvmf/common.sh@656 -- # echo 1 00:26:19.001 07:19:03 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:19.001 07:19:03 -- nvmf/common.sh@663 -- # echo tcp 00:26:19.001 07:19:03 -- nvmf/common.sh@664 -- # echo 4420 00:26:19.001 07:19:03 -- nvmf/common.sh@665 -- # echo ipv4 00:26:19.001 07:19:03 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:19.259 07:19:03 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4394e380-0dda-4e84-a19e-ee0fd4897b77 --hostid=4394e380-0dda-4e84-a19e-ee0fd4897b77 -a 10.0.0.1 -t tcp -s 4420 00:26:19.259 00:26:19.259 Discovery Log Number of Records 2, Generation counter 2 00:26:19.259 =====Discovery Log Entry 0====== 00:26:19.259 trtype: tcp 00:26:19.259 adrfam: ipv4 00:26:19.259 subtype: current discovery subsystem 00:26:19.259 treq: not specified, sq flow control disable supported 00:26:19.259 portid: 1 00:26:19.259 trsvcid: 4420 00:26:19.259 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:19.259 traddr: 10.0.0.1 00:26:19.259 eflags: none 00:26:19.259 sectype: none 00:26:19.259 =====Discovery Log Entry 1====== 00:26:19.259 trtype: tcp 00:26:19.259 adrfam: ipv4 00:26:19.259 subtype: nvme subsystem 00:26:19.259 treq: not specified, sq flow control disable supported 00:26:19.259 portid: 1 00:26:19.259 trsvcid: 4420 00:26:19.259 subnqn: kernel_target 00:26:19.259 traddr: 10.0.0.1 00:26:19.259 eflags: none 00:26:19.259 sectype: none 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
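For the kernel_target half, the same export is built through the in-kernel nvmet configfs tree instead of SPDK RPCs. The xtrace above shows only the values being echoed, not the files they are redirected into; the sketch below fills those in from the standard nvmet configfs layout, so the attribute file names are assumptions while the paths and values come from the trace:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    # redirect targets below are assumed; the trace only records the echoed values
    echo SPDK-kernel_target > "$sub/attr_serial"
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme1n3 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420    # lists kernel_target, as in the discovery log above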
00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:19.259 07:19:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:22.542 Initializing NVMe Controllers 00:26:22.542 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:22.542 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:22.542 Initialization complete. Launching workers. 00:26:22.542 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31708, failed: 0 00:26:22.542 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31708, failed to submit 0 00:26:22.542 success 0, unsuccess 31708, failed 0 00:26:22.542 07:19:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:22.542 07:19:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:25.826 Initializing NVMe Controllers 00:26:25.826 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:25.826 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:25.826 Initialization complete. Launching workers. 00:26:25.826 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66904, failed: 0 00:26:25.826 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26688, failed to submit 40216 00:26:25.826 success 0, unsuccess 26688, failed 0 00:26:25.826 07:19:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:25.826 07:19:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:29.107 Initializing NVMe Controllers 00:26:29.107 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:29.107 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:29.107 Initialization complete. Launching workers. 
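Reading the per-run summaries printed by the abort example: the three trailing counters partition the aborts that were submitted (for the first spdk_target run above, 770 + 423 + 0 = 1193 submitted), so a clean run keeps the "failed" columns at 0. A throwaway one-liner for totalling them from a saved run log; the file name and the gloss on "unsuccess" (aborts that completed without cancelling an I/O) are the editor's reading, not something the test scripts do:

    grep -oE 'success [0-9]+, unsuccess [0-9]+, failed [0-9]+' abort_run.log |
        awk '{ ok += $2; noop += $4; err += $6 }
             END { printf "aborts: %d succeeded, %d did not cancel an I/O, %d failed\n", ok, noop, err }'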
00:26:29.107 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 97363, failed: 0 00:26:29.107 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24326, failed to submit 73037 00:26:29.107 success 0, unsuccess 24326, failed 0 00:26:29.107 07:19:12 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:29.107 07:19:12 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:29.107 07:19:12 -- nvmf/common.sh@677 -- # echo 0 00:26:29.107 07:19:12 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:29.107 07:19:12 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:29.107 07:19:12 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:29.107 07:19:12 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:29.107 07:19:12 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:29.107 07:19:12 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:29.107 00:26:29.107 real 0m10.530s 00:26:29.107 user 0m5.188s 00:26:29.107 sys 0m2.479s 00:26:29.107 07:19:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.107 07:19:12 -- common/autotest_common.sh@10 -- # set +x 00:26:29.107 ************************************ 00:26:29.107 END TEST kernel_target_abort 00:26:29.107 ************************************ 00:26:29.107 07:19:12 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:29.107 07:19:12 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:29.107 07:19:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:29.107 07:19:12 -- nvmf/common.sh@116 -- # sync 00:26:29.107 07:19:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:29.107 07:19:12 -- nvmf/common.sh@119 -- # set +e 00:26:29.107 07:19:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:29.107 07:19:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:29.107 rmmod nvme_tcp 00:26:29.107 rmmod nvme_fabrics 00:26:29.107 rmmod nvme_keyring 00:26:29.107 07:19:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:29.107 07:19:12 -- nvmf/common.sh@123 -- # set -e 00:26:29.107 07:19:12 -- nvmf/common.sh@124 -- # return 0 00:26:29.107 07:19:12 -- nvmf/common.sh@477 -- # '[' -n 91927 ']' 00:26:29.107 07:19:12 -- nvmf/common.sh@478 -- # killprocess 91927 00:26:29.107 07:19:12 -- common/autotest_common.sh@926 -- # '[' -z 91927 ']' 00:26:29.107 07:19:12 -- common/autotest_common.sh@930 -- # kill -0 91927 00:26:29.107 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (91927) - No such process 00:26:29.107 Process with pid 91927 is not found 00:26:29.107 07:19:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 91927 is not found' 00:26:29.107 07:19:12 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:29.107 07:19:12 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:29.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:29.672 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:29.672 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:29.672 07:19:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:29.672 07:19:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:29.672 07:19:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.672 07:19:13 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:29.672 07:19:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.672 07:19:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:29.672 07:19:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.672 07:19:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:29.672 00:26:29.672 real 0m24.748s 00:26:29.672 user 0m50.102s 00:26:29.672 sys 0m5.641s 00:26:29.672 07:19:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.672 07:19:13 -- common/autotest_common.sh@10 -- # set +x 00:26:29.672 ************************************ 00:26:29.672 END TEST nvmf_abort_qd_sizes 00:26:29.672 ************************************ 00:26:29.672 07:19:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:29.672 07:19:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:29.672 07:19:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:29.672 07:19:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:29.672 07:19:13 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:26:29.672 07:19:13 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:26:29.672 07:19:13 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:26:29.672 07:19:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:29.672 07:19:13 -- common/autotest_common.sh@10 -- # set +x 00:26:29.672 07:19:13 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:26:29.672 07:19:13 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:26:29.672 07:19:13 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:26:29.672 07:19:13 -- common/autotest_common.sh@10 -- # set +x 00:26:31.569 INFO: APP EXITING 00:26:31.569 INFO: killing all VMs 00:26:31.569 INFO: killing vhost app 00:26:31.569 INFO: EXIT DONE 00:26:32.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:32.394 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:32.394 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:32.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:32.961 Cleaning 00:26:32.961 Removing: /var/run/dpdk/spdk0/config 00:26:32.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:32.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:32.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:32.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:32.961 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:32.961 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:33.220 Removing: /var/run/dpdk/spdk1/config 00:26:33.220 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:33.220 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:33.220 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:33.220 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:33.220 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:33.220 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:33.220 Removing: /var/run/dpdk/spdk2/config 00:26:33.220 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:33.220 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:33.220 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:33.220 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:33.220 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:33.220 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:33.220 Removing: /var/run/dpdk/spdk3/config 00:26:33.220 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:33.220 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:33.220 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:33.220 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:33.220 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:33.220 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:33.220 Removing: /var/run/dpdk/spdk4/config 00:26:33.220 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:33.220 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:33.220 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:33.220 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:33.220 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:33.220 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:33.220 Removing: /dev/shm/nvmf_trace.0 00:26:33.220 Removing: /dev/shm/spdk_tgt_trace.pid55475 00:26:33.220 Removing: /var/run/dpdk/spdk0 00:26:33.220 Removing: /var/run/dpdk/spdk1 00:26:33.220 Removing: /var/run/dpdk/spdk2 00:26:33.220 Removing: /var/run/dpdk/spdk3 00:26:33.220 Removing: /var/run/dpdk/spdk4 00:26:33.220 Removing: /var/run/dpdk/spdk_pid55326 00:26:33.220 Removing: /var/run/dpdk/spdk_pid55475 00:26:33.220 Removing: /var/run/dpdk/spdk_pid55781 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56061 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56244 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56326 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56417 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56511 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56544 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56585 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56640 00:26:33.220 Removing: /var/run/dpdk/spdk_pid56741 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57367 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57431 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57500 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57528 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57626 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57654 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57752 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57781 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57837 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57867 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57914 00:26:33.220 Removing: /var/run/dpdk/spdk_pid57944 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58095 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58125 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58204 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58274 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58298 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58357 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58376 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58411 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58430 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58465 
00:26:33.220 Removing: /var/run/dpdk/spdk_pid58484 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58519 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58534 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58573 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58587 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58627 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58641 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58681 00:26:33.220 Removing: /var/run/dpdk/spdk_pid58695 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58735 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58749 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58789 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58803 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58842 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58857 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58892 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58911 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58946 00:26:33.479 Removing: /var/run/dpdk/spdk_pid58965 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59000 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59019 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59054 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59072 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59104 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59124 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59158 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59178 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59212 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59237 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59280 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59297 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59340 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59354 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59394 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59408 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59449 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59518 00:26:33.479 Removing: /var/run/dpdk/spdk_pid59628 00:26:33.479 Removing: /var/run/dpdk/spdk_pid60040 00:26:33.479 Removing: /var/run/dpdk/spdk_pid66756 00:26:33.479 Removing: /var/run/dpdk/spdk_pid67100 00:26:33.479 Removing: /var/run/dpdk/spdk_pid69488 00:26:33.479 Removing: /var/run/dpdk/spdk_pid69867 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70105 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70146 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70406 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70418 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70472 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70531 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70591 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70635 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70637 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70660 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70694 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70706 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70760 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70818 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70878 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70922 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70924 00:26:33.479 Removing: /var/run/dpdk/spdk_pid70955 00:26:33.479 Removing: /var/run/dpdk/spdk_pid71237 00:26:33.479 Removing: /var/run/dpdk/spdk_pid71381 00:26:33.479 Removing: /var/run/dpdk/spdk_pid71643 00:26:33.479 Removing: /var/run/dpdk/spdk_pid71693 00:26:33.479 Removing: /var/run/dpdk/spdk_pid72059 00:26:33.479 Removing: /var/run/dpdk/spdk_pid72588 00:26:33.479 Removing: /var/run/dpdk/spdk_pid73011 00:26:33.479 Removing: /var/run/dpdk/spdk_pid73963 00:26:33.479 Removing: 
/var/run/dpdk/spdk_pid74923 00:26:33.479 Removing: /var/run/dpdk/spdk_pid75046 00:26:33.479 Removing: /var/run/dpdk/spdk_pid75114 00:26:33.479 Removing: /var/run/dpdk/spdk_pid76556 00:26:33.479 Removing: /var/run/dpdk/spdk_pid76799 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77243 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77353 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77501 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77547 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77587 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77638 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77796 00:26:33.479 Removing: /var/run/dpdk/spdk_pid77943 00:26:33.479 Removing: /var/run/dpdk/spdk_pid78207 00:26:33.479 Removing: /var/run/dpdk/spdk_pid78330 00:26:33.479 Removing: /var/run/dpdk/spdk_pid78743 00:26:33.479 Removing: /var/run/dpdk/spdk_pid79120 00:26:33.479 Removing: /var/run/dpdk/spdk_pid79122 00:26:33.737 Removing: /var/run/dpdk/spdk_pid81361 00:26:33.737 Removing: /var/run/dpdk/spdk_pid81661 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82149 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82151 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82491 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82505 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82525 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82550 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82555 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82699 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82708 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82811 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82817 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82921 00:26:33.737 Removing: /var/run/dpdk/spdk_pid82923 00:26:33.737 Removing: /var/run/dpdk/spdk_pid83408 00:26:33.737 Removing: /var/run/dpdk/spdk_pid83453 00:26:33.737 Removing: /var/run/dpdk/spdk_pid83600 00:26:33.737 Removing: /var/run/dpdk/spdk_pid83720 00:26:33.737 Removing: /var/run/dpdk/spdk_pid84108 00:26:33.737 Removing: /var/run/dpdk/spdk_pid84364 00:26:33.737 Removing: /var/run/dpdk/spdk_pid84844 00:26:33.737 Removing: /var/run/dpdk/spdk_pid85397 00:26:33.737 Removing: /var/run/dpdk/spdk_pid85857 00:26:33.737 Removing: /var/run/dpdk/spdk_pid85943 00:26:33.737 Removing: /var/run/dpdk/spdk_pid86034 00:26:33.737 Removing: /var/run/dpdk/spdk_pid86124 00:26:33.737 Removing: /var/run/dpdk/spdk_pid86280 00:26:33.737 Removing: /var/run/dpdk/spdk_pid86366 00:26:33.737 Removing: /var/run/dpdk/spdk_pid86451 00:26:33.737 Removing: /var/run/dpdk/spdk_pid86549 00:26:33.737 Removing: /var/run/dpdk/spdk_pid86893 00:26:33.737 Removing: /var/run/dpdk/spdk_pid87582 00:26:33.737 Removing: /var/run/dpdk/spdk_pid88939 00:26:33.737 Removing: /var/run/dpdk/spdk_pid89139 00:26:33.737 Removing: /var/run/dpdk/spdk_pid89425 00:26:33.737 Removing: /var/run/dpdk/spdk_pid89723 00:26:33.737 Removing: /var/run/dpdk/spdk_pid90273 00:26:33.737 Removing: /var/run/dpdk/spdk_pid90278 00:26:33.737 Removing: /var/run/dpdk/spdk_pid90648 00:26:33.737 Removing: /var/run/dpdk/spdk_pid90808 00:26:33.737 Removing: /var/run/dpdk/spdk_pid90967 00:26:33.737 Removing: /var/run/dpdk/spdk_pid91064 00:26:33.737 Removing: /var/run/dpdk/spdk_pid91219 00:26:33.737 Removing: /var/run/dpdk/spdk_pid91328 00:26:33.737 Removing: /var/run/dpdk/spdk_pid91996 00:26:33.737 Removing: /var/run/dpdk/spdk_pid92034 00:26:33.737 Removing: /var/run/dpdk/spdk_pid92069 00:26:33.737 Removing: /var/run/dpdk/spdk_pid92318 00:26:33.737 Removing: /var/run/dpdk/spdk_pid92349 00:26:33.737 Removing: /var/run/dpdk/spdk_pid92384 00:26:33.737 Clean 00:26:33.737 killing process with pid 
49687 00:26:33.998 killing process with pid 49691 00:26:33.998 07:19:17 -- common/autotest_common.sh@1436 -- # return 0 00:26:33.998 07:19:17 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:26:33.998 07:19:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:33.998 07:19:17 -- common/autotest_common.sh@10 -- # set +x 00:26:33.998 07:19:17 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:26:33.998 07:19:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:33.998 07:19:17 -- common/autotest_common.sh@10 -- # set +x 00:26:33.998 07:19:17 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:33.998 07:19:17 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:33.998 07:19:17 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:33.998 07:19:17 -- spdk/autotest.sh@394 -- # hash lcov 00:26:33.998 07:19:17 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:33.998 07:19:17 -- spdk/autotest.sh@396 -- # hostname 00:26:33.998 07:19:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:34.272 geninfo: WARNING: invalid characters removed from testname! 00:26:56.213 07:19:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:58.116 07:19:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:00.649 07:19:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:02.550 07:19:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:05.082 07:19:48 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:07.623 07:19:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:09.520 07:19:53 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:09.520 07:19:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:09.520 07:19:53 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:09.520 07:19:53 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.520 07:19:53 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.520 07:19:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.520 07:19:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.520 07:19:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.520 07:19:53 -- paths/export.sh@5 -- $ export PATH 00:27:09.520 07:19:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.520 07:19:53 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:09.520 07:19:53 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:09.520 07:19:53 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720682393.XXXXXX 00:27:09.520 07:19:53 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720682393.pkSJOz 00:27:09.520 07:19:53 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:09.520 07:19:53 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:27:09.520 07:19:53 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:09.520 07:19:53 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:09.521 07:19:53 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 
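The coverage pass a few records back reduces to capture, merge, filter with lcov: capture the post-test counters, add them to the baseline taken at the start of the run, then strip third-party and uninteresting paths. Condensed, with the long --rc switches from the log dropped and SPDK_DIR / OUT standing in for /home/vagrant/spdk_repo/spdk and its output directory:

    lcov --no-external -q -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done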
00:27:09.521 07:19:53 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:09.521 07:19:53 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:27:09.521 07:19:53 -- common/autotest_common.sh@10 -- $ set +x 00:27:09.778 07:19:53 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:27:09.778 07:19:53 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:09.778 07:19:53 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:09.778 07:19:53 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:09.778 07:19:53 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:09.778 07:19:53 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:09.778 07:19:53 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:09.778 07:19:53 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:09.778 07:19:53 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:09.778 07:19:53 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:09.778 07:19:53 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:09.778 + [[ -n 5125 ]] 00:27:09.778 + sudo kill 5125 00:27:09.787 [Pipeline] } 00:27:09.804 [Pipeline] // timeout 00:27:09.808 [Pipeline] } 00:27:09.824 [Pipeline] // stage 00:27:09.828 [Pipeline] } 00:27:09.845 [Pipeline] // catchError 00:27:09.855 [Pipeline] stage 00:27:09.857 [Pipeline] { (Stop VM) 00:27:09.870 [Pipeline] sh 00:27:10.149 + vagrant halt 00:27:13.431 ==> default: Halting domain... 00:27:20.039 [Pipeline] sh 00:27:20.316 + vagrant destroy -f 00:27:22.849 ==> default: Removing domain... 00:27:23.120 [Pipeline] sh 00:27:23.402 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:27:23.411 [Pipeline] } 00:27:23.430 [Pipeline] // stage 00:27:23.436 [Pipeline] } 00:27:23.454 [Pipeline] // dir 00:27:23.460 [Pipeline] } 00:27:23.479 [Pipeline] // wrap 00:27:23.486 [Pipeline] } 00:27:23.502 [Pipeline] // catchError 00:27:23.512 [Pipeline] stage 00:27:23.515 [Pipeline] { (Epilogue) 00:27:23.530 [Pipeline] sh 00:27:23.811 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:29.092 [Pipeline] catchError 00:27:29.093 [Pipeline] { 00:27:29.104 [Pipeline] sh 00:27:29.382 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:29.640 Artifacts sizes are good 00:27:29.649 [Pipeline] } 00:27:29.667 [Pipeline] // catchError 00:27:29.679 [Pipeline] archiveArtifacts 00:27:29.686 Archiving artifacts 00:27:29.854 [Pipeline] cleanWs 00:27:29.868 [WS-CLEANUP] Deleting project workspace... 00:27:29.868 [WS-CLEANUP] Deferred wipeout is used... 00:27:29.916 [WS-CLEANUP] done 00:27:29.918 [Pipeline] } 00:27:29.937 [Pipeline] // stage 00:27:29.943 [Pipeline] } 00:27:29.959 [Pipeline] // node 00:27:29.965 [Pipeline] End of Pipeline 00:27:30.000 Finished: SUCCESS
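Stripped of the Jenkins pipeline plumbing, the teardown at the end of the run is just the following shell steps (paths as they appear above; the Jenkins-side archiveArtifacts and cleanWs steps have no shell equivalent and are omitted):

    vagrant halt
    vagrant destroy -f
    mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh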